(warning) The Data Prep (Paxata) documentation is now available on the DataRobot public documentation site. See the Data Prep section for user documentation and connector information. After the 2021.2 SP1 release, the content on this site will be removed and replaced with a link to the DataRobot public documentation site.

Cloudera CDH6 HDFS Connector Documentation

User Persona: Paxata Admin - Data Source Admin - IT/DevOps

Availability: This Connector is not available to Paxata SaaS customers.

*Note: This document covers all configuration fields available during Connector setup. Some fields may have already been filled out by your Admin at an earlier step of configuration and may not be visible to you. For more information on Paxata’s Connector Framework, please see here.

Also: Your Admin may have named this Connector something else in the list of Data Sources.

Configuring Paxata

This connector allows you to connect to an HDFS cluster for imports and exports. The fields you are required to set up depend on the authentication method you select, either Simple or Kerberos. The authentication method you select applies to all Data Sources that you create based on this connector configuration.

Note: Configuring this Connector requires file system access on the Paxata Server and a core-site.xml with the Hadoop cluster configuration. Please reach out to your Customer Success representative for assistance with this step.

General

  • Name: Name of the data source as it will appear to users in the UI.
  • Description: Description of the data source as it will appear to users in the UI.

Something to consider: You may connect Paxata to multiple HDFS clusters; a descriptive name helps users identify the appropriate data source.

Hadoop Cluster

  • Authentication Method: Choose between Simple or Kerberos. The method you select applies to all Data Sources that you create based on this connector configuration. See the Simple Configuration or Kerberos Configuration section below for details on your selection.
  • Cluster Core Site XML Path: Fully qualified path of core-site.xml on webserver. Example: /path/to/core-site.xml
  • Cluster HDFS Site XML Path: Fully qualified path of hdfs-site.xml on webserver. Example: /path/to/hdfs-site.xml
  • Native Hadoop Library Path: Fully qualified path of native Hadoop libraries on webserver. Example: /path/to/libraries
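For reference, a minimal core-site.xml pointing at a NameNode might look like the fragment below. This is only a sketch: the hostname and port are placeholders, and your cluster's file will typically contain many more properties.

```xml
<?xml version="1.0"?>
<!-- Minimal sketch of a core-site.xml; the host and port are placeholders. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>
```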

Simple Configuration (only for Simple authentication)

  • Username: The application web server will connect to your HDFS cluster as the username you provide here.

Kerberos Configuration

The following parameters are required for Kerberos authentication.

  • Principal: Kerberos Principal.
  • Realm: Kerberos Realm.
  • KDC Hostname: Kerberos Key Distribution Center Hostname.
  • Kerberos Configuration File: Fully-qualified path of Kerberos configuration file on webserver.
  • Keytab File: Fully-qualified path of Kerberos Keytab File on webserver.
  • Use Application User: Check this box to read/write as the logged-in application user, or uncheck it to connect as the Proxy User specified below.
  • Proxy User: The proxy user used to authenticate with the cluster. ${user.name} can be entered as the proxy user. ${user.name} works similarly to selecting Use Application User but allows for more flexibility. For example:
    • To add a domain to the user’s credentials, enter \domain_name\${user.name} in the Proxy User field. Paxata will pass both the domain and the username.
      • Example: \Accounts\${user.name} results in \Accounts\Joe (assuming Joe is the username).
    • To apply a text modifier to the username, add .modifier to the key ${user.name}. The acceptable modifiers are: toLower, toUpper, toLowerCase, toUpperCase, and trim.
      • For example, ${user.name.toLowerCase} converts Joe into joe (assuming Joe is the username).
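The substitution behavior described above can be sketched as follows. Note this is an illustrative model only, not Paxata's actual implementation; the `resolve_proxy_user` function is a hypothetical name, and only the `${user.name}` token and the five modifiers come from the documentation above.

```python
import re

# The five modifiers documented above, mapped to plain string operations.
# (toLower/toLowerCase and toUpper/toUpperCase are assumed to be aliases.)
MODIFIERS = {
    "toLower": str.lower,
    "toLowerCase": str.lower,
    "toUpper": str.upper,
    "toUpperCase": str.upper,
    "trim": str.strip,
}

def resolve_proxy_user(template: str, username: str) -> str:
    """Replace ${user.name[.modifier]} tokens with the login username."""
    def substitute(match: re.Match) -> str:
        modifier = match.group(1)
        if modifier is None:
            return username
        return MODIFIERS[modifier](username)
    return re.sub(r"\$\{user\.name(?:\.(\w+))?\}", substitute, template)

print(resolve_proxy_user(r"\Accounts\${user.name}", "Joe"))   # \Accounts\Joe
print(resolve_proxy_user("${user.name.toLowerCase}", "Joe"))  # joe
```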

Configuration

  • Data Store Root Directory: The "parent directory" on your cluster that the Data Library reads from and writes to for import and export operations. Import and export from sub-directories of the root are also supported.
  • Map INT96 to Datetime: Check to convert INT96 type fields to Datetime values on import.
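As a rough illustration of the root-directory behavior described above: paths browsed in the UI resolve relative to the configured root. The joining logic and example paths below are assumptions for illustration, not Paxata internals.

```python
import posixpath

# Hypothetical Data Store Root Directory; all browsed paths resolve under it.
root = "/paxata/data-library"

def resolve(relative_path: str) -> str:
    """Join a browsed path onto the configured root directory."""
    return posixpath.join(root, relative_path.lstrip("/"))

print(resolve("sales/2021/jan.csv"))  # /paxata/data-library/sales/2021/jan.csv
```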

Data Import Information

Import Via Browsing

  • Browse:

    • Browse to a file and select it for import.

    • Supported data formats:

      • Delimited datasets: comma, tab...

      • XML

      • JSON

      • Excel: XLS and XLSX

      • Avro

      • Parquet

      • Fixed format

  • Wildcard:

    • Globbing is supported
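To illustrate the kind of glob patterns the Wildcard option accepts, the sketch below uses Python's `fnmatch`, which follows similar shell-globbing rules; the paths and pattern are made up, and Paxata's own glob engine may differ in the details.

```python
from fnmatch import fnmatch

# Hypothetical HDFS paths under the Data Store Root Directory.
paths = [
    "/data/sales/2021/jan.csv",
    "/data/sales/2021/feb.csv",
    "/data/sales/readme.txt",
]

# A glob pattern selecting all CSV files for 2021.
pattern = "/data/sales/2021/*.csv"

matches = [p for p in paths if fnmatch(p, pattern)]
print(matches)  # ['/data/sales/2021/jan.csv', '/data/sales/2021/feb.csv']
```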

Import Via SQL Query

Not supported

Export

  • Supported using one of the stream-based formats listed above (under Import Via Browsing)