Overview
This section introduces several common operations, such as starting a cluster, working with tables (load, query, update), working with streams, and running approximate queries.
Running the Examples: The topics in this section refer to source code examples that are shipped with the product. Instructions for running each example can be found in its source code.
The source code for these examples is located in the quickstart/src/main/scala/org/apache/spark/examples/snappydata and quickstart/python directories of the TIBCO ComputeDB product distribution.
You can run the examples in any of the following ways:

- In Local Mode: Use the bin/run-example script (to run Scala examples) or the bin/spark-submit script (to run Python examples). These examples run colocated with Spark and the Snappy store in the same JVM.
- As a Job: Many of the Scala examples are also implemented as a Snappy job. In this case, the examples can be submitted as a job to a running TIBCO ComputeDB cluster. Refer to the jobs section for details on how to run a job.
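As a rough sketch, the commands below illustrate the two run modes. The example class name, Python file name, jar path, and lead-node address are illustrative assumptions, not exact names from your distribution; check the quickstart directories for the actual example names, and adjust paths for your installation.

```shell
# Local mode: run a Scala example colocated with Spark in the same JVM.
# (The example class name is an assumption; list the quickstart directory
# for the actual example names shipped with your distribution.)
./bin/run-example CreateReplicatedRowTable

# Local mode: run a Python example with spark-submit.
# (The script name is illustrative; see quickstart/python for real files.)
./bin/spark-submit quickstart/python/CreateTable.py

# As a job: submit a Scala example to a running TIBCO ComputeDB cluster.
# (Jar path and lead host:port are assumptions; see the jobs section
# of the documentation for the full snappy-job.sh interface.)
./bin/snappy-job.sh submit \
  --app-name CreateTableJob \
  --class org.apache.spark.examples.snappydata.CreateReplicatedRowTable \
  --app-jar examples/jars/quickstart.jar \
  --lead localhost:8090
```

In local mode the store and Spark run in one process, which is convenient for experimentation; submitting as a job runs the same logic inside an already-running cluster, which is closer to a production deployment.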
Note
TIBCO ComputeDB also supports a Java API. Refer to the documentation for more details on the Java API.
The following topics are covered in this section:

- How to Access TIBCO ComputeDB Store from an Existing Spark Installation Using the Smart Connector
- How to Load Data from External Data Stores (e.g., HDFS, Cassandra, Hive)
- How to Use Approximate Query Processing (AQP) to Run Approximate Queries
- How to Import Data from a Hive Table into a TIBCO ComputeDB Table
- How to Connect TIBCO® Data Virtualization to TIBCO ComputeDB
- How to Configure Apache Zeppelin to Securely and Concurrently Access the TIBCO ComputeDB Cluster