IO, CREATE/INSERT, and External Data¶
Setup¶
In [1]:
import ibis
import os

# Connection details default to the Cloudera quickstart VM / Ibis test environment
hdfs_port = int(os.environ.get('IBIS_TEST_WEBHDFS_PORT', 50070))
user = os.environ.get('IBIS_TEST_WEBHDFS_USER', 'ubuntu')

# Connect to WebHDFS first, then hand the HDFS client to the Impala connection
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', user=user, port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
                          hdfs_client=hdfs)

ibis.options.interactive = True
Creating new Impala tables from Ibis expressions¶
Suppose you have an Ibis expression that produces a table:
In [2]:
table = con.table('functional_alltypes')
t2 = table[table, (table.bigint_col - table.int_col).name('foo')]
expr = (t2
        [t2.bigint_col > 30]
        .group_by('string_col')
        .aggregate([t2.foo.min().name('min_foo'),
                    t2.foo.max().name('max_foo'),
                    t2.foo.sum().name('sum_foo')]))
expr
Out[2]:

To create a table in the database from the results of this expression, use the connection’s create_table method:
In [3]:
con.create_table('testing_table', expr, database='ibis_testing')
By default, this creates a table stored in Parquet format in HDFS. Support for views, external tables, configurable file formats, and so forth will come in the future; feedback on what kind of interface would be useful for that is welcome.
In [4]:
con.table('testing_table')
Out[4]:

Tables can be similarly dropped with drop_table:
In [5]:
con.drop_table('testing_table', database='ibis_testing')
Inserting data into existing Impala tables¶
The client’s insert method can append new data to an existing table or overwrite the existing data.
In [6]:
con.create_table('testing_table', expr)
con.table('testing_table')
Out[6]:

In [7]:
con.insert('testing_table', expr)
con.table('testing_table')
Out[7]:

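The calls above append. To replace the table’s contents instead, the same method can overwrite; a minimal sketch, assuming the overwrite keyword argument is available in your Ibis version:

# Overwrite rather than append (overwrite is assumed here; check your client's insert signature)
con.insert('testing_table', expr, overwrite=True)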
In [8]:
con.drop_table('testing_table')
Uploading / downloading data from HDFS¶
If you’ve set up an HDFS connection, you can use the Ibis HDFS interface to look through your data and read and write files to and from HDFS:
In [9]:
hdfs = con.hdfs
hdfs.ls('/__ibis/ibis-testing-data')
Out[9]:
['avro',
'awards_players.csv',
'awards_players.parquet',
'batting.csv',
'batting.parquet',
'csv',
'diamonds.csv',
'diamonds.parquet',
'functional_alltypes.csv',
'functional_alltypes.parquet',
'ibis_testing.db',
'parquet',
'udf']
In [10]:
hdfs.ls('/__ibis/ibis-testing-data/parquet')
Out[10]:
['functional_alltypes',
'tpch_customer',
'tpch_lineitem',
'tpch_nation',
'tpch_orders',
'tpch_part',
'tpch_partsupp',
'tpch_region',
'tpch_supplier']
Suppose we wanted to download /__ibis/ibis-testing-data/parquet/functional_alltypes, which is a directory. We need only do:
In [11]:
!rm -rf parquet_dir/
hdfs.get('/__ibis/ibis-testing-data/parquet/functional_alltypes', 'parquet_dir')
Out[11]:
'/ibis/docs/source/notebooks/tutorial/parquet_dir'
Now we have that directory locally:
In [12]:
!ls parquet_dir/
9a41de519352ab07-4e76bc4d9fb5a789_1624886651_data.0.parq
9a41de519352ab07-4e76bc4d9fb5a78a_778826485_data.0.parq
9a41de519352ab07-4e76bc4d9fb5a78b_1277612014_data.0.parq
Files and directories can be written to HDFS just as easily using put:
In [13]:
path = '/__ibis/dir-write-example'
if hdfs.exists(path):
    hdfs.rmdir(path)
hdfs.put(path, 'parquet_dir', verbose=True)
Out[13]:
'/__ibis/dir-write-example'
In [14]:
hdfs.ls('/__ibis/dir-write-example')
Out[14]:
['9a41de519352ab07-4e76bc4d9fb5a789_1624886651_data.0.parq',
'9a41de519352ab07-4e76bc4d9fb5a78a_778826485_data.0.parq',
'9a41de519352ab07-4e76bc4d9fb5a78b_1277612014_data.0.parq']
Delete files with rm or directories with rmdir:
In [15]:
hdfs.rmdir('/__ibis/dir-write-example')
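A single file can be removed the same way with rm; a quick sketch on a hypothetical path:

# Hypothetical file path; rm removes individual files, rmdir removes directories
hdfs.rm('/__ibis/some-temporary-file.parq')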
In [16]:
!rm -rf parquet_dir/
Queries on Parquet, Avro, and Delimited files in HDFS¶
Ibis can easily create temporary or persistent Impala tables that reference data in the following formats:
- Parquet (parquet_file)
- Avro (avro_file)
- Delimited text formats such as CSV and TSV (delimited_file)
Parquet is the easiest because the schema can be read from the data files:
In [17]:
path = '/__ibis/ibis-testing-data/parquet/tpch_lineitem'
lineitem = con.parquet_file(path)
lineitem.limit(2)
Out[17]:

In [18]:
lineitem.l_extendedprice.sum()
Out[18]:

If you want to query a Parquet file and also create a table in Impala that remains after your session, you can pass more information to parquet_file:
In [19]:
table = con.parquet_file(path, name='my_parquet_table',
                         database='ibis_testing',
                         persist=True)
table.l_extendedprice.sum()
Out[19]:

In [20]:
con.table('my_parquet_table').l_extendedprice.sum()
Out[20]:

In [21]:
con.drop_table('my_parquet_table')
To query delimited files, you need to write down an Ibis schema. At some point we’d like to build helper tools that infer the schema for you, all in good time.
There are some CSV files in the test folder, so let’s use those:
In [22]:
hdfs.get('/__ibis/ibis-testing-data/csv', 'csv-files')
Out[22]:
'/ibis/docs/source/notebooks/tutorial/csv-files'
In [23]:
!cat csv-files/0.csv
63IEbRheTh,0.679388707915,6
mG4hlqnjeG,2.80710565922,15
JTPdX9SZH5,-0.155126406372,55
2jcl6FypOl,1.03787834032,21
k3TbJLaadQ,-1.40190801103,23
rP5J4xvinM,-0.442092712869,22
WniUylixYt,-0.863748033806,27
znsDuKOB1n,-0.566029637098,47
4SRP9jlo1M,0.331460412318,88
KsfjPyDf5e,-0.578930506363,70
In [24]:
!rm -rf csv-files/
The schema here is pretty simple (see ibis.schema for more):
In [25]:
schema = ibis.schema([('foo', 'string'),
                      ('bar', 'double'),
                      ('baz', 'int32')])

table = con.delimited_file('/__ibis/ibis-testing-data/csv',
                           schema)
table.limit(10)
Out[25]:

In [26]:
table.bar.summary()
Out[26]:

For functions like parquet_file and delimited_file, an HDFS directory must be passed (we’ll add support for S3 and other filesystems later), and the directory must contain files that all have the same schema.
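Other delimiters can be handled by the same helper; a minimal sketch for tab-separated files, using a hypothetical HDFS path and assuming the delimiter keyword argument is supported by your Ibis version:

# Hypothetical TSV directory; delimiter is an assumed keyword argument
tsv_table = con.delimited_file('/__ibis/my-tsv-data', schema, delimiter='\t')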
If you have Avro data, you can query it too, as long as you have the full Avro schema:
In [27]:
avro_schema = {
    "fields": [
        {"type": ["int", "null"], "name": "R_REGIONKEY"},
        {"type": ["string", "null"], "name": "R_NAME"},
        {"type": ["string", "null"], "name": "R_COMMENT"}],
    "type": "record",
    "name": "a"
}
path = '/__ibis/ibis-testing-data/avro/tpch.region'
hdfs.mkdir(path)
table = con.avro_file(path, avro_schema)
table
Out[27]:

Other helper functions for interacting with the database¶
We’re adding a growing list of useful utility functions for interacting with an Impala cluster to the client object. The idea is that you should be able to do any database-admin-type work with Ibis without having to switch over to the Impala SQL shell. If there are ways we can make this more pleasant, please let us know.
Here are some of the features, with examples below:
- Listing and searching for available databases and tables
- Creating and dropping databases
- Getting table schemas
In [28]:
con.list_databases(like='ibis*')
Out[28]:
['ibis_testing']
In [29]:
con.list_tables(database='ibis_testing', like='tpch*')
Out[29]:
['tpch_customer',
'tpch_lineitem',
'tpch_nation',
'tpch_orders',
'tpch_part',
'tpch_partsupp',
'tpch_region',
'tpch_region_avro',
'tpch_supplier']
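If you only need a yes/no answer, the client also exposes existence checks; a small sketch, assuming exists_database and exists_table are available in your Ibis version:

# Both helpers are assumed here; verify against your client's API
con.exists_database('ibis_testing')
con.exists_table('functional_alltypes', database='ibis_testing')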
In [30]:
schema = con.get_schema('functional_alltypes')
schema
Out[30]:
ibis.Schema {
id int32
bool_col boolean
tinyint_col int8
smallint_col int16
int_col int32
bigint_col int64
float_col float
double_col double
date_string_col string
string_col string
timestamp_col timestamp
year int32
month int32
}
Databases can be created, too, and you can set the HDFS storage path you want for the data files:
In [31]:
db = 'ibis_testing2'
con.create_database(db, path='/__ibis/my-test-database', force=True)
# you may or may not have to give the impala user write and execute permissions to '/__ibis/my-test-database'
hdfs.chmod('/__ibis/my-test-database', '777')
In [32]:
con.create_table('example_table', con.table('functional_alltypes'),
                 database=db, force=True)
There should now be data files in the indicated location in HDFS:
In [33]:
hdfs.ls('/__ibis/my-test-database')
Out[33]:
['example_table']
To drop a database, including all tables in it, you can use drop_database with force=True:
In [34]:
con.drop_database(db, force=True)
Dealing with Partitioned tables in Impala¶
Placeholder: This is not yet implemented. If you have use cases, please let us know.
Faster queries on small data in Impala¶
Since Impala internally uses LLVM to compile parts of queries (aka “codegen”) to make them faster on large data sets, there is a certain amount of overhead when running many kinds of queries, even on small datasets. You can disable LLVM code generation when using Ibis, which may significantly speed up queries on smaller datasets:
In [35]:
from numpy.random import rand
In [36]:
con.disable_codegen()
In [37]:
t = con.table('ibis_testing.functional_alltypes')
%timeit (t.double_col + rand()).sum().execute()
29.3 ms ± 4.59 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [38]:
# Turn codegen back on
con.disable_codegen(False)
In [39]:
%timeit (t.double_col + rand()).sum().execute()
84.4 ms ± 7.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
It’s important to remember that codegen is a fixed overhead that pays off at scale: it will significantly speed up queries on big data, so only disable it when working with small datasets.