import ibis
import os
hdfs_port = int(os.environ.get('IBIS_WEBHDFS_PORT', 50070))
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
                          hdfs_client=hdfs)
ibis.options.interactive = True

Creating new Impala tables from Ibis expressions

Suppose you have an Ibis expression that produces a table:

table = con.table('functional_alltypes')

t2 = table[table, (table.bigint_col - table.int_col).name('foo')]

expr = (t2
        [t2.bigint_col > 30]
        .group_by('string_col')
        .aggregate([t2.foo.min().name('min_foo'),
                    t2.foo.max().name('max_foo'),
                    t2.foo.sum().name('sum_foo')]))
expr
  string_col  min_foo  max_foo  sum_foo
0          6       54       54    39420
1          4       36       36    26280
2          7       63       63    45990
3          8       72       72    52560
4          5       45       45    32850
5          9       81       81    59130

To create a table in the database from the results of this expression, use the connection’s create_table method:

con.create_table('testing_table', expr, database='ibis_testing')

By default, this creates a table stored in Parquet format in HDFS. Support for views, external tables, configurable file formats, and so forth will come in the future. Feedback on what kind of interface would be useful for these is welcome.
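
As a quick check that the new table was created, you can list tables matching its name (list_tables is covered in more detail later in this tutorial):

con.list_tables(like='testing*', database='ibis_testing')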

con.table('testing_table')
  string_col  min_foo  max_foo  sum_foo
0          9       81       81    59130
1          8       72       72    52560
2          5       45       45    32850
3          6       54       54    39420
4          4       36       36    26280
5          7       63       63    45990

Tables can be dropped in a similar way with drop_table:

con.drop_table('testing_table', database='ibis_testing')

Inserting data into existing Impala tables

The client’s insert method can append new data to an existing table or overwrite the data already in it.
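
Appending is the default, as the session below shows; to replace the table’s contents instead, insert also accepts an overwrite flag. A minimal sketch (not run here):

con.insert('testing_table', expr, overwrite=True)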

con.create_table('testing_table', expr)
con.table('testing_table')
  string_col  min_foo  max_foo  sum_foo
0          8       72       72    52560
1          5       45       45    32850
2          6       54       54    39420
3          4       36       36    26280
4          7       63       63    45990
5          9       81       81    59130
con.insert('testing_table', expr)
con.table('testing_table')
   string_col  min_foo  max_foo  sum_foo
0           9       81       81    59130
1           6       54       54    39420
2           4       36       36    26280
3           7       63       63    45990
4           8       72       72    52560
5           5       45       45    32850
6           9       81       81    59130
7           8       72       72    52560
8           5       45       45    32850
9           6       54       54    39420
10          4       36       36    26280
11          7       63       63    45990
con.drop_table('testing_table')

Uploading / downloading data from HDFS

If you’ve set up an HDFS connection, you can use the Ibis HDFS interface to look through your data and read and write files to and from HDFS:

hdfs.ls('/')
[u'__ibis', u'hbase', u'home', u'test-warehouse', u'tmp', u'user']
hdfs = con.hdfs
hdfs.ls('/__ibis/ibis-testing-data')
[u'avro', u'csv', u'ibis_testing.db', u'parquet', u'udf']
hdfs.ls('/__ibis/ibis-testing-data/parquet')
[u'functional_alltypes',
 u'tpch_ctas_cancel',
 u'tpch_customer',
 u'tpch_lineitem',
 u'tpch_nation',
 u'tpch_orders',
 u'tpch_part',
 u'tpch_partsupp',
 u'tpch_region',
 u'tpch_supplier']

Suppose we wanted to download /__ibis/ibis-testing-data/parquet/functional_alltypes, which is a directory. We need only do:

!rm -rf parquet_dir/
hdfs.get('/__ibis/ibis-testing-data/parquet/functional_alltypes', 'parquet_dir')
'/home/wesm/code/ibis-notebooks/basic-tutorial/parquet_dir'

Now we have that directory locally:

!ls parquet_dir/
244b1a31ffc1d401-54bea73c44789d_884230467_data.0.parq
244b1a31ffc1d401-54bea73c44789e_1434316465_data.0.parq
244b1a31ffc1d401-54bea73c44789f_1434316465_data.0.parq

Files and directories can be written to HDFS just as easily using put:

path = '/__ibis/dir-write-example'
if hdfs.exists(path):
    hdfs.rmdir(path)
hdfs.put(path, 'parquet_dir', verbose=True)
'/__ibis/dir-write-example'
hdfs.ls('/__ibis/dir-write-example')
[u'244b1a31ffc1d401-54bea73c44789d_884230467_data.0.parq',
 u'244b1a31ffc1d401-54bea73c44789e_1434316465_data.0.parq',
 u'244b1a31ffc1d401-54bea73c44789f_1434316465_data.0.parq']

Delete files with rm or directories with rmdir:

hdfs.rmdir('/__ibis/dir-write-example')
!rm -rf parquet_dir/
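
Removing a single file works the same way with rm; a quick sketch with a hypothetical path:

hdfs.rm('/__ibis/dir-write-example/data.0.parq')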

Queries on Parquet, Avro, and Delimited files in HDFS

Ibis can easily create temporary or persistent Impala tables that reference data in the following formats:

  • Parquet (parquet_file)
  • Avro (avro_file)
  • Delimited text formats (CSV, TSV, etc.) (delimited_file)

Parquet is the easiest because the schema can be read from the data files:

path = '/__ibis/ibis-testing-data/parquet/tpch_lineitem'

lineitem = con.parquet_file(path)
lineitem.limit(2)
   l_orderkey  l_partkey  l_suppkey  l_linenumber l_quantity l_extendedprice
0           1     155190       7706             1      17.00        21168.23
1           1      67310       7311             2      36.00        45983.16

  l_discount l_tax l_returnflag l_linestatus  l_shipdate l_commitdate
0       0.04  0.02            N            O  1996-03-13   1996-02-12
1       0.09  0.06            N            O  1996-04-12   1996-02-28

  l_receiptdate     l_shipinstruct l_shipmode
0    1996-03-22  DELIVER IN PERSON      TRUCK
1    1996-04-20   TAKE BACK RETURN       MAIL

                            l_comment
0             egular courts above the
1  ly final dependencies: slyly bold
lineitem.l_extendedprice.sum()
Decimal('229577310901.20')
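
If you want to see the schema that was read from the data files, you can call schema() on the table expression (output not shown here):

lineitem.schema()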

If you want to query a Parquet file and also create a table in Impala that remains after your session, you can pass more information to parquet_file:

table = con.parquet_file(path, name='my_parquet_table',
                         database='ibis_testing',
                         persist=True)
table.l_extendedprice.sum()
Decimal('229577310901.20')
con.table('my_parquet_table').l_extendedprice.sum()
Decimal('229577310901.20')
con.drop_table('my_parquet_table')

To query delimited files, you need to write down an Ibis schema. At some point we’d like to build helper tools that will infer the schema for you.

There are some CSV files in the test folder, so let’s use those:

hdfs.get('/__ibis/ibis-testing-data/csv', 'csv-files')
'/home/wesm/code/ibis-notebooks/basic-tutorial/csv-files'
!cat csv-files/0.csv
uXpivkAYSO,1.01994339956,27
BcIteg32mR,-0.497745040687,11
5TZQvBHcwI,-0.26731402764,84
XX8upUlbDe,-0.0709191028435,96
sJdk6chNnx,-0.27102973984,60
NTVYJb1d7D,-1.23610658876,14
3pX0MQsLIz,-0.737931321383,16
IpNIwdTK4P,-0.293743595331,23
ucIvA79467,0.0707646026785,12
JIunzB1CZs,-1.1253763919,4
!rm -rf csv-files/

The schema here is pretty simple (see ibis.schema for more):

schema = ibis.schema([('foo', 'string'),
                      ('bar', 'double'),
                      ('baz', 'int32')])

table = con.delimited_file('/__ibis/ibis-testing-data/csv',
                           schema)
table.limit(10)
          foo       bar  baz
0  uXpivkAYSO  1.019943   27
1  BcIteg32mR -0.497745   11
2  5TZQvBHcwI -0.267314   84
3  XX8upUlbDe -0.070919   96
4  sJdk6chNnx -0.271030   60
5  NTVYJb1d7D -1.236107   14
6  3pX0MQsLIz -0.737931   16
7  IpNIwdTK4P -0.293744   23
8  ucIvA79467  0.070765   12
9  JIunzB1CZs -1.125376    4
table.bar.summary()
   count  nulls       min       max        sum      mean  approx_nunique
0    100      0 -1.236107  1.019943 -34.094578 -0.340946              10

For functions like parquet_file and delimited_file, an HDFS directory must be passed (we’ll add support for S3 and other filesystems later), and the directory must contain files that all have the same schema.

If you have Avro data, you can query it too, as long as you have the full Avro schema:

avro_schema = {
    "fields": [
        {"type": ["int", "null"], "name": "R_REGIONKEY"},
        {"type": ["string", "null"], "name": "R_NAME"},
        {"type": ["string", "null"], "name": "R_COMMENT"}],
    "type": "record",
    "name": "a"
}

table = con.avro_file('/__ibis/ibis-testing-data/avro/tpch.region', avro_schema)
table
Empty DataFrame
Columns: [r_regionkey, r_name, r_comment]
Index: []

Other helper functions for interacting with the database

We’re adding a growing list of useful utility functions to the client object for interacting with an Impala cluster. The idea is that you should be able to do any database-admin-type work with Ibis and not have to switch over to the Impala SQL shell. If there are ways we can make this more pleasant, please let us know.

Here are some of the features, with examples below:

  • Listing and searching for available databases and tables
  • Creating and dropping databases
  • Getting table schemas

con.list_databases(like='ibis*')
['ibis_testing']
con.list_tables(database='ibis_testing', like='tpch*')
['tpch_ctas_cancel',
 'tpch_customer',
 'tpch_lineitem',
 'tpch_nation',
 'tpch_orders',
 'tpch_part',
 'tpch_partsupp',
 'tpch_region',
 'tpch_region_avro',
 'tpch_supplier']
schema = con.get_schema('functional_alltypes')
schema
ibis.Schema {
  id               int32
  bool_col         boolean
  tinyint_col      int8
  smallint_col     int16
  int_col          int32
  bigint_col       int64
  float_col        float
  double_col       double
  date_string_col  string
  string_col       string
  timestamp_col    timestamp
  year             int32
  month            int32
}

Databases can be created, too, and you can set the HDFS storage path you want for the data files:

db = 'ibis_testing2'
con.create_database(db, path='/__ibis/my-test-database')
con.create_table('example_table', con.table('functional_alltypes'),
                 database=db)

Hopefully, there will be data files in the indicated spot in HDFS:

hdfs.ls('/__ibis/my-test-database')
[u'example_table']

To drop a database, including all tables in it, you can use drop_database with force=True:

con.drop_database(db, force=True)

Dealing with Partitioned tables in Impala

Placeholder: This is not yet implemented. If you have use cases, please let us know.

Faster queries on small data in Impala

Since Impala internally uses LLVM to compile parts of queries (aka “codegen”) to make them faster on large datasets, there is a certain amount of overhead in running many kinds of queries, even on small datasets. You can disable LLVM code generation when using Ibis, which may significantly speed up queries on smaller datasets:

from numpy.random import rand
con.disable_codegen()
t = con.table('ibis_testing.functional_alltypes')

%timeit (t.double_col + rand()).sum().execute()
10 loops, best of 3: 47.2 ms per loop
# Turn codegen back on
con.disable_codegen(False)
%timeit (t.double_col + rand()).sum().execute()
1 loop, best of 3: 466 ms per loop

It’s important to remember that codegen incurs a fixed overhead per query, but it will significantly speed up queries on big data.