CREATE EXTERNAL TABLE

CREATE EXTERNAL TABLE [IF NOT EXISTS] [schema_name.]table_name
    [( column-definition [ , column-definition ] * )]
    USING datasource
    [OPTIONS (key1 val1, key2 val2, ...)]

For more information on column-definition, refer to Column Definition For Column Table.

Refer to these sections for more information on Creating Table, Creating Sample Table, Creating Temporary Table and Creating Stream Table.

EXTERNAL

External tables point to external data sources. TIBCO ComputeDB supports all the data sources supported by Spark. Use external tables to load data in parallel from any of these sources. The table definition is persisted in the catalog and is visible across all sessions.

USING <data source>

Specify the data source to use for this table. The data source can be one of TEXT, CSV, JSON, JDBC, PARQUET, ORC, or LIBSVM, or the fully qualified class name of a custom implementation of org.apache.spark.sql.sources.DataSourceRegister.
Note that most prominent datastores provide a 'DataSource' implementation and are therefore accessible as tables. For instance, you can use the Cassandra Spark package to create external tables that point to Cassandra tables and run queries on them directly. You can freely mix external tables and TIBCO ComputeDB managed tables in your queries.
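
For instance, a minimal sketch of an external table backed by the Spark Cassandra Connector (assuming the spark-cassandra-connector package is deployed on the cluster and spark.cassandra.connection.host is set in the Spark configuration; the keyspace and table names here are placeholders):

CREATE EXTERNAL TABLE CUSTOMER_CASSANDRA USING org.apache.spark.sql.cassandra OPTIONS (table 'customer', keyspace 'test');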

Example

Create an external table using the PARQUET data source on the local filesystem

snappy> CREATE EXTERNAL TABLE STAGING_AIRLINE USING parquet OPTIONS(path '../../quickstart/data/airlineParquetData');
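
Once created, the external table can be queried like any other table in the catalog. For example, a simple row count against the table created above:

snappy> SELECT count(*) FROM STAGING_AIRLINE;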

Create an external table using the CSV data source on the local filesystem

CREATE EXTERNAL TABLE IF NOT EXISTS CUSTOMER_STAGING USING csv OPTIONS(path '../../quickstart/src/main/resources/customer.csv');
CREATE EXTERNAL TABLE CUSTOMER_STAGING_1 (C_CUSTKEY INTEGER NOT NULL, C_NAME VARCHAR(25) NOT NULL, 
C_ADDRESS VARCHAR(40) NOT NULL, C_NATIONKEY INTEGER NOT NULL, C_PHONE VARCHAR(15) NOT NULL, 
C_ACCTBAL DECIMAL(15,2) NOT NULL, C_MKTSEGMENT VARCHAR(10) NOT NULL, C_COMMENT VARCHAR(117) NOT NULL) 
USING csv OPTIONS (path '../../quickstart/src/main/resources/customer.csv');
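
The CSV data source also accepts the standard Spark CSV reader options. For example, a sketch that treats the first line as a header and infers column types (the table name is hypothetical, and this assumes the file actually contains a header row):

CREATE EXTERNAL TABLE CUSTOMER_STAGING_2 USING csv OPTIONS (path '../../quickstart/src/main/resources/customer.csv', header 'true', inferSchema 'true', delimiter ',');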

You can also load data from AWS S3 buckets, as given in the example below:

CREATE EXTERNAL TABLE NYCTAXI USING parquet OPTIONS(path 's3a://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@<bucket>/<folder>/<data>');

Specifying AWS credentials to access S3 buckets

Providing AWS credentials explicitly in the path URL is not advisable. You can instead supply these credentials in any of the following ways; an example without credentials in the path follows the list:

  • Provide the AWS credentials as properties in the conf/leads file when launching the cluster.

    For example, append the following properties, with appropriate values, to each entry in your conf/leads file.

    -spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem -spark.hadoop.fs.s3a.access.key=<access-key-id> -spark.hadoop.fs.s3a.secret.key=<secret-access-key>
    
  • Provide the AWS credentials as environment variables in conf/spark-env.sh. You can add these two entries with appropriate values in that file before launching the cluster.

    export AWS_ACCESS_KEY_ID=<access-key-id>
    export AWS_SECRET_ACCESS_KEY=<secret-access-key>
    
  • Attach an IAM role with appropriate permissions to the instance(s) where the cluster is set up. To do this, go to the EC2 dashboard on the AWS console, then select and right-click your EC2 instance. Select Instance Settings > Attach/Replace IAM Role. From the drop-down list, select the appropriate IAM role. If you do not see any IAM role in the list, you need to create one. Refer to the AWS documentation for more details.

    Note

    This option is applicable only if your cluster is running on AWS EC2 instance(s). Also, this may not work if your S3 buckets are created in regions where AWS signature version 2 is not supported.

    When you attach an IAM role to an instance, a temporary set of credentials is generated by AWS for the instance(s). These credentials are then picked up by the cluster when it accesses the S3 buckets.

    Then, provide the following configuration in the conf/spark-defaults.conf file before launching the cluster.

    spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
    spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.InstanceProfileCredentialsProvider
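
With the credentials made available through any of these methods, you do not need to embed them in the path URL. A minimal sketch (the bucket and folder names are placeholders):

CREATE EXTERNAL TABLE NYCTAXI USING parquet OPTIONS(path 's3a://<bucket>/<folder>/<data>');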
    

Related Topics