This selects all rows from the pokes table into a local directory. A similar query selects the sum of a column, and another selects all rows from partition ds=2008-08-15 of the invites table into an HDFS directory. Partitioned tables must always have a partition selected in the WHERE clause of the statement.

To configure Hive with Hadoop, you must first edit the hive-env.sh file in the $HIVE_HOME/conf directory. To configure Derby, Hive must be informed of where the database is stored. Before starting Hive for the first time, make sure Hadoop is set up; the first step is to format the NameNode. The default HMS heap memory settings below apply to Hadoop (Hive), Spark, and Presto clusters that are running Hive metastore version 2.3 or later.

Start HiveServer2 with $HIVE_HOME/bin/hive --service hiveserver2 &, or keep it running after you log out with nohup hiveserver2 & (equivalently, nohup hive --service hiveserver2 &). You will see warnings at startup, which can be ignored. HiveServer2 (introduced in Hive 0.11) has its own CLI called Beeline. Cloudera does not currently support using the Thrift HTTP protocol to connect Beeline to HiveServer2 (meaning that you cannot set hive.server2.transport.mode=http). When connecting from a SQL client, provide the URL or IP address of the target server in the Server field. More information can be found by running hive -H or hive --help. For information about WebHCat errors and logging, see Error Codes and Responses and Log Files in the WebHCat manual.

All release versions are in branches named "branch-0.#" or "branch-1.#" or the upcoming "branch-2.#", with the exception of release 0.8.1, which is in "branch-0.8-r2".

From Hive 2.1.0 onwards (with HIVE-13027), Hive uses Log4j2's asynchronous logger by default. Execution logs are invaluable for debugging run-time errors; logging during Hive execution on a Hadoop cluster is controlled by the Hadoop configuration. Also note that local mode execution is done in a separate, child JVM (of the Hive client).

If the Hive metastore uses MySQL and the root password must be reset, restart MySQL (net stop mysql, then net start mysql), log in with mysql -u root -p, and run update user set authentication_string=password('123456') where user='root'; running this statement without first selecting the mysql database fails with ERROR 1046 (3D000): No database selected.

If HiveServer2 runs in a container, go to the command line of the Hive server and start hiveserver2: docker exec -it 60f2c3b5eb32 bash, then hiveserver2. This doesn't log anything to stdout, but it does start a running process; check that something is listening on port 10000 with netstat -anp | grep 10000, which should print a line such as tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN 446/java. The --force option has no effect if server stop was already invoked. For more information on Beeline, check out Starting Beeline in Standalone Embedded and Remote modes.
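As a rough sketch of the start-and-verify sequence above (the log file path and the 30-second timeout are illustrative choices, not Hive defaults):

  #!/usr/bin/env bash
  # Start HiveServer2 in the background and keep it alive after logout.
  nohup "$HIVE_HOME/bin/hive" --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &

  # Wait up to ~30 seconds for the default Thrift port 10000 to start listening.
  for i in $(seq 1 30); do
    if netstat -an | grep -q ':10000 .*LISTEN'; then
      echo "HiveServer2 is listening on port 10000"
      exit 0
    fi
    sleep 1
  done
  echo "HiveServer2 did not start listening within 30 seconds" >&2
  exit 1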
We can run both batch and interactive shell commands via the CLI service, which we will cover in the following sections. We will go over Hive CLI commands, the Hive command line interface, and Hive's default service in this post. The Hive shell is the default service for interacting with Hive, and its command line interface (CLI) allows us to run both batch and interactive commands. Hive is commonly used in production Linux and Windows environments and is well-maintained.

A partition column is not part of the data itself but is derived from the partition that a particular dataset is loaded into. Hive can also delete and update records using ACID transactions.

To add a new node, the DataNode daemon should be started manually using the $HADOOP_HOME/bin/hadoop-daemon.sh script. Once Hadoop is running, you can start the Hive server. To build the current Hive code, work from the master branch; here, {version} refers to the current Hive version.

Audit logs were added in Hive 0.7 for secure client connections and in Hive 0.10 for non-secure connections. In order to obtain performance metrics via the PerfLogger, you need to set DEBUG level logging for the PerfLogger class.

To start Beeline in embedded mode and connect to Hive, use the connection string !connect jdbc:hive2://; running this command prompts for a user name and password. If the metastore database has not been initialized (for example, there is no metastore database in MySQL), you will get an error as soon as you execute a query.

Hive command line options, usage examples: execute a query with hive -e, for example $ hive -e 'select * from test'; run the same query in silent mode with $ hive -S -e 'select * from test'; or dump data to a file in silent mode with $ hive -S -e 'select col from tab1' > a.txt. Instead of entering the Hive CLI and running the query interactively, the -e option lets us execute queries directly from the command line.
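Building on the batch-mode options just shown, here is a small sketch; the table name test, the script file query.hql, and the output path are placeholders rather than anything mandated by Hive:

  # Run a single statement non-interactively.
  hive -e 'SELECT COUNT(*) FROM test;'

  # Silent mode: suppress log output and redirect the result to a file.
  hive -S -e 'SELECT col FROM test;' > /tmp/col_dump.txt

  # Run a file of HiveQL statements.
  hive -f query.hql

  # Pass a substitution variable, referenced inside the script as ${hivevar:dt}.
  hive --hivevar dt=2008-08-15 -f query.hql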
The Hive distribution comes with hiveserver2, which is located in the $HIVE_HOME/bin/ directory; run this command without any arguments to start HiveServer2. If you are running the metastore in Remote mode, you must start the Hive metastore before you start HiveServer2. HiveServer2 supports a command shell, Beeline, that works with HiveServer2; beeline is also located in the $HIVE_HOME/bin directory, and Beeline must use the port specified in the Hive JDBC URL.

To connect from the SQuirreL SQL client, from the 'Class Name' input box select the Hive driver for working with HiveServer2, org.apache.hive.jdbc.HiveDriver, then select 'Aliases -> Add Alias.' to create a connection alias to your HiveServer2 instance.

The Apache Hive archive is named apache-hive-<version>-bin.tar.gz. Go to the Hive shell with the command sudo hive and enter the command create database <database name> to create a new database in Hive. The Hive compiler generates MapReduce jobs for most queries; conversely, local mode only runs with one reducer and can be very slow when processing larger data sets. Hive uses log4j for logging, and audit logs are written by the Hive metastore server for every metastore API invocation.

For Spark, the spark-shell command below enables a single shared session for the Hive Thrift server: spark-shell --conf spark.sql.hive.thriftServer.singleSession=true. The second option is to use spark-submit: bundle the code into a jar file and submit it.

Running HiveServer2 and Beeline: starting from Hive 2.1, we need to run the schematool command below as an initialization step. To start the Hive Thrift server, run the command below; the running service process can be verified with the $ jps -lm command. Let's connect to hiveserver2 now.
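A sketch of that initialization-and-connect flow follows; the Derby -dbType, the localhost:10000 endpoint, and the user name hive are illustrative assumptions rather than required values:

  # One-time metastore schema initialization (Hive 2.1+); use -dbType mysql,
  # postgres, etc. for an external metastore database.
  $HIVE_HOME/bin/schematool -dbType derby -initSchema

  # Start the HiveServer2 (Thrift) service in the background.
  nohup $HIVE_HOME/bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &

  # Confirm the service process is up.
  jps -lm | grep -i hiveserver2

  # Connect with Beeline over JDBC on the default port 10000.
  $HIVE_HOME/bin/beeline -u jdbc:hive2://localhost:10000 -n hive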
You can also start the Hive server HS2 (HiveServer2) using the hive --service command; in order to run it as a service, run the same command under nohup: nohup $HIVE_HOME/bin/hiveserver2 &. Using it on Windows would require slightly different steps. The HiveServer2 Web UI is available at port 10002 (127.0.0.1:10002) by default.

Follow these steps to start the different components of Hive on a node. Run the Hive CLI: $HIVE_HOME/bin/hive. Run HiveServer2 and Beeline: $HIVE_HOME/bin/hiveserver2, then $HIVE_HOME/bin/beeline -u jdbc:hive2://$HiveServer2_HOST:$HiveServer2_PORT. Run HCatalog and start up the HCatalog server: $HIVE_HOME/hcatalog/sbin/hcat_server.sh, then run the HCatalog CLI. You can also start the Hive command-line interface with cd $HIVE_HOME/bin followed by hive; you are then able to issue SQL-like commands and interact directly with HDFS.

Initialize the metastore schema with $ $HIVE_HOME/bin/schematool -dbType <db type> -initSchema. You can use a PostgreSQL database as the Hive external metastore on Amazon EMR. Karmasphere (http://karmasphere.com) is a commercial product.

Note that in all the examples that follow, INSERT (into a Hive table, local directory, or HDFS directory) is optional. Now we can do some complex data analysis on the table u_data. Note that if you're using Hive 0.5.0 or earlier, or any version that doesn't include HIVE-287, you will need to use COUNT(1) in place of COUNT(*).
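As a hedged example of that analysis, the sketch below assumes u_data has the MovieLens-style columns userid, movieid, rating, and unixtime used in the Hive tutorial; adjust the column names to your own schema:

  # Total number of rows (use COUNT(1) on Hive releases without HIVE-287).
  hive -e 'SELECT COUNT(*) FROM u_data;'

  # Distribution of ratings, written to a local file in silent mode.
  hive -S -e 'SELECT rating, COUNT(*) FROM u_data GROUP BY rating;' > /tmp/rating_counts.txt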