Blog

  • Feb 23 2015

    NoSQL with PostgreSQL 9.4 and JSONB

    The introduction of the JSONB data type in PostgreSQL definitely makes the “NoSQL” side of this relational...

  • Feb 09 2015

    Back From FOSDEM 2015

    The 2015 edition of FOSDEM was awesome: the usual mix of meeting old friends, talking about interesting topics, seeing tremendous activity in all Open Source domains, and having Belgian beers in the evenings.

    FOSDEM PGDAY

    On the Friday before the real FOSDEM event, our own PostgreSQL Europe organized a one-day event, the FOSDEM PGDAY. It was an intense day of talks about PostgreSQL, where I had the opportunity to present pgloader in the context of dealing with database migrations.

    Migrate from MySQL to PostgreSQL in one command

    PostgreSQL User Group, Paris Meetup

    This presentation about migrating to PostgreSQL was also given more recently at the PostgreSQL User Group Meetup in Paris, and I'm happy to announce here that the group now has more than 200 registered members!

    Check out our next meetup which is already scheduled!

    FOSDEM

    At the FOSDEM event proper, I had the pleasure of presenting my recent talk about backups:

    Nobody cares about backups, think about data recovery

    If you want to remember only one thing about that presentation, it must be this: we don't care about how you take backups, we only care about whether you're able to recover data in worst case scenarios. The only way to check a backup is to recover it. Do automated testing of your backups, which means automated recovery, as in the sketch below.
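
    Here is a minimal sketch of such an automated recovery test, assuming a nightly base backup tarball and a WAL archive; every path, the port and the smoke-test table name are assumptions, and a real test would also compare the results against production:

    $ rm -rf /tmp/recovery-test && mkdir -p /tmp/recovery-test
    $ tar xf /backups/base/latest.tar -C /tmp/recovery-test     # assumed backup location
    $ chmod 700 /tmp/recovery-test
    $ echo "restore_command = 'cp /backups/wal/%f %p'" \
        > /tmp/recovery-test/recovery.conf                      # assumed WAL archive
    $ pg_ctl -D /tmp/recovery-test -o "-p 5499" -w start        # replays WAL, then opens up
    $ psql -p 5499 -Atc 'select count(*) from critical_table'   # made-up smoke-test query
    $ pg_ctl -D /tmp/recovery-test -w stop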


  • Jan 26 2015

    Incremental backup with Barman 1.4.0

    Today version 1.4.0 of Barman has been officially released. The most important feature is incremental backup support,...  

  • Jan 23 2015

    How monitoring of WAL archiving improves with PostgreSQL 9.4 and pg_stat_archiver

    PostgreSQL 9.4 introduces a new statistic in the catalogue, called pg_stat_archiver. Thanks to the SQL language it...  

  • Jan 22 2015

    My First Slashdot Effect

    Thanks to the Postgres Weekly issue #89 and a post reaching the Hacker News front page (see Pgloader: A High-speed PostgreSQL Swiss Army Knife, Written in Lisp), it seems that I just had my first Slashdot effect...

    Well actually you know what? I don't...

    So please consider using the new mirror http://dimitri.github.io/pgloader/ and maybe voting on Hacker News for either tooling around your favorite database system, PostgreSQL, or your favorite programming language, Common Lisp...

    It all happens at https://news.ycombinator.com/item?id=8924270.

    Coming to FOSDEM?

    If you want to know more about pgloader and are visiting the FOSDEM PGDAY or plain FOSDEM, I'll be there talking about Migrating to PostgreSQL, the new story (that's pgloader) and about some more reasons why You'd better have tested backups...

    If you're not there on the Friday but still want to talk about pgloader, join us at the PostgreSQL devroom and booth!


  • Jan 16 2015

    New release: pgloader 3.2

    PostgreSQL comes with an awesome bulk copy protocol and tooling, best known as the COPY and \copy commands. Being a transactional system, PostgreSQL's COPY implementation will ROLLBACK any work done if a single error is found in the data set you're importing. That's the reason why pgloader got started: it provides error handling for the COPY protocol.
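
    To see that all-or-nothing behavior for yourself, here is a minimal sketch; the database, table and file names are all made up. A single malformed row causes the entire load to be rejected:

    $ createdb copytest
    $ printf '1\n2\nthree\n' > /tmp/ids.txt
    $ psql -d copytest -c 'create table ids(id integer)'
    $ psql -d copytest -c "\copy ids from '/tmp/ids.txt'"   # fails: "three" is not an integer
    $ psql -d copytest -Atc 'select count(*) from ids'      # prints 0: the whole load rolled back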

    That's basically what pgloader used to be all about

    As soon as pgloader had the capability to load data from unreliable sources, another use case appeared on the horizon, and soon enough pgloader grew the capacity to load data from other databases, some of which have a more liberal notion of what counts as sane data type input.

    To adapt to advanced use cases in database migration support, pgloader has grown an advanced command language wherein you can define your own load-time data projections and transformations, and your own type casting rules too. For example:
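
    Here is a sketch of what such a command file can look like; the exact casting rules shown are illustrative rather than copied from the documentation:

    $ cat > sakila.load <<'EOF'
    LOAD DATABASE
         FROM mysql://root@localhost/sakila
         INTO postgresql:///sakila

     CAST type datetime to timestamptz
              drop default drop not null using zero-dates-to-null,
          type tinyint to boolean using tinyint-to-boolean;
    EOF
    $ pgloader sakila.load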

    New in version 3.2 is that in simple cases you don't need that command file any more. Check out the pgloader quick start page to see examples where you can drive pgloader entirely from your command line!

    Here's one such example, migrating a whole MySQL database over to PostgreSQL, including automated schema discovery, automated type casting and on-the-fly data cleanup (think zero dates, or booleans in tinyint(1) disguise), with support for indexes, primary keys, foreign keys and comments. It's as simple as:

    $ createdb sakila
    $ pgloader mysql://root@localhost/sakila pgsql:///sakila
    2015-01-16T09:49:36.068000+01:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
    2015-01-16T09:49:36.074000+01:00 LOG Data errors in '/private/tmp/pgloader/'
                        table name       read   imported     errors            time
    ------------------------------  ---------  ---------  ---------  --------------
                   fetch meta data         43         43          0          0.222s
                      create, drop          0         36          0          0.130s
    ------------------------------  ---------  ---------  ---------  --------------
                             actor        200        200          0          0.133s
                           address        603        603          0          0.035s
                          category         16         16          0          0.027s
                              city        600        600          0          0.018s
                           country        109        109          0          0.017s
                          customer        599        599          0          0.035s
                              film       1000       1000          0          0.075s
                        film_actor       5462       5462          0          0.147s
                     film_category       1000       1000          0          0.035s
                         film_text       1000       1000          0          0.053s
                         inventory       4581       4581          0          0.086s
                          language          6          6          0          0.041s
                           payment      16049      16049          0          0.436s
                            rental      16044      16044          0          0.474s
                             staff          2          2          0          0.170s
                             store          2          2          0          0.010s
            Index Build Completion          0          0          0          0.000s
    ------------------------------  ---------  ---------  ---------  --------------
                    Create Indexes         40         40          0          0.343s
                   Reset Sequences          0         13          0          0.026s
                      Primary Keys         16         14          2          0.013s
                      Foreign Keys         22         22          0          0.078s
                          Comments          0          0          0          0.000s
    ------------------------------  ---------  ---------  ---------  --------------
                 Total import time      47273      47273          0          2.261s
    

    Other options are available to support a variety of input file formats, including compressed CSV files fetched from a remote location, as in:

    curl http://pgsql.tapoueh.org/temp/2013_Gaz_113CDs_national.txt.gz \
        | gunzip -c                                                        \
        | pgloader --type csv                                              \
                   --field "usps,geoid,aland,awater,aland_sqmi,awater_sqmi,intptlat,intptlong" \
                   --with "skip header = 1"                                \
                   --with "fields terminated by '\t'"                      \
                   -                                                       \
                   postgresql:///pgloader?districts_longlat
    
    2015-01-16T10:09:06.027000+01:00 LOG Main logs in '/private/tmp/pgloader/pgloader.log'
    2015-01-16T10:09:06.032000+01:00 LOG Data errors in '/private/tmp/pgloader/'
                        table name       read   imported     errors            time
    ------------------------------  ---------  ---------  ---------  --------------
                             fetch          0          0          0          0.010s
    ------------------------------  ---------  ---------  ---------  --------------
                 districts_longlat        440        440          0          0.087s
    ------------------------------  ---------  ---------  ---------  --------------
                 Total import time        440        440          0          0.097s
    

    As is usual with Unix commands, the - input filename stands for standard input, which allows streaming data from a remote compressed file straight into PostgreSQL.

    So if you have any data loading job, including data migrations from SQLite, MySQL or MS SQL Server, have a look at pgloader!


  • Jan 08 2015

    The CHECK clause for updatable views

    Written by Giuseppe Broccolo. First published in Italian. Since PostgreSQL 9.3, it is possible to update...

  • Dec 11 2014

    Japan PostgreSQL Conference 2014

    Japan has been an early and vigorous adopter of PostgreSQL (back in 2006, when PostgreSQL was still...  

  • Dec 08 2014

    BDR for PostgreSQL: Present and future

    For a couple of years now, a team at 2ndQuadrant led by Andres Freund has been working...

  • Dec 04 2014

    PostgreSQL 9.4 for administrators (part two)

    Written by Francesco Canovai. First published in Italian. In the previous instalment, we introduced the logical...
