Remote Executor
The Remote Executor component is only relevant if you use Apache Knox.
Ataccama Remote Executor is a REST API service used to start remote ONE jobs from Runtime Server, ONE Desktop, and ONE. This article describes how to enable and configure the Remote Executor component on ONE Runtime Server.
Installation and configuration
- Install ONE Runtime Server on the edge node of your Hadoop or Spark cluster so that the executor can run Spark jobs. This adds the Remote Executor Web Page to the ONE Runtime Server Admin and lets application admins monitor remote job executions (logs, state, queues).
- Enable and configure the Remote Executor component in the Server Configuration file:
- Create client.runtimeConfig and client.serverConfig from default.runtimeConfig and default.serverConfig in /opt/ataccama/runtime/server/etc:

```shell
# Create server and runtime configuration files
cd /opt/ataccama/runtime/server/etc
cp default.runtimeConfig client.runtimeConfig
cp default.serverConfig client.serverConfig
```
- Modify the newly created runtime configuration file:

```xml
<!-- Add the Keycloak deployment contributor to contributedConfigs,
     after <contributedConfigs>. -->
<config class="com.ataccama.server.keycloak.KeycloakDeploymentContributor">
    <keycloakConfigs>
        <keycloakConfig name="keycloak-local">
            <!-- Define common parameters for all clients.
                 They can be overridden by client-specific settings. -->
            <url>https://keycloak-url:8443/auth</url>
            <realm>ataccamaone</realm>
            <attributes>
                <attribute name="ssl-required" value="external"/>
            </attributes>
            <clients>
                <client id="one-admin-center-re">
                    <secret>USE-HERE-SECRET-FROM-KEYCLOAK-CLIENT</secret>
                    <attributes>
                        <!-- Define client-specific settings. -->
                        <attribute name="use-resource-role-mappings" value="false"/>
                        <attribute name="public-client" value="false"/>
                        <attribute name="bearer-only" value="false"/>
                        <attribute name="autodetect-bearer-only" value="false"/>
                        <attribute name="always-refresh-token" value="false"/>
                        <attribute name="principal-attribute" value="preferred_username"/>
                    </attributes>
                </client>
            </clients>
        </keycloakConfig>
    </keycloakConfigs>
</config>
```
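Before restarting the server, it can be worth verifying that the Keycloak URL and realm are correct and reachable. Keycloak publishes an OIDC discovery document per realm, so an HTTP 200 response confirms both values at once. This is an optional sanity check, not part of the product; the host and realm below are the placeholders from the sample configuration and must be replaced with your own:

```shell
# Hypothetical sanity check; substitute your real Keycloak host and realm.
KC_URL="https://keycloak-url:8443/auth"
KC_REALM="ataccamaone"
# -k skips certificate validation (useful with self-signed certs); HTTP 200 means
# the URL and realm in the runtime configuration are valid.
curl -sk -o /dev/null -w "HTTP %{http_code}\n" --max-time 5 \
  "${KC_URL}/realms/${KC_REALM}/.well-known/openid-configuration" \
  || echo "Keycloak not reachable at ${KC_URL}"
```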
- Modify the newly created server configuration file:

```xml
<!-- Change the runtime configuration name from default.runtimeConfig to client.runtimeConfig. -->
<runtimeConfiguration>client.runtimeConfig</runtimeConfiguration>

<!-- Find the HttpDispatcher component and replace the existing listener with a
     listener that uses Keycloak, as shown in the following example: -->
<component class="com.ataccama.dqc.web.HttpDispatcher">
    <filters/>
    <listeners>
        <!-- Use this listener when you are ready to apply SSL:
        <listenerBean healthstateRefreshRate="60000" maxWaitingRequests="10" backlog="50"
            waitingRequestsWarningThreshold="1" port="8443" threadPoolTimeout="10000"
            healthStateRecoveryTimeout="300000" name="default" threads="10" servletOnly="false"
            ssl="true" keyStoreFile="/opt/ataccama/ssl/dqtoolrenp.jks" keyStorePassword="keystore-password">
        -->
        <listenerBean healthstateRefreshRate="60000" maxWaitingRequests="10" backlog="50"
            waitingRequestsWarningThreshold="1" port="8888" threadPoolTimeout="10000"
            healthStateRecoveryTimeout="300000" name="default" threads="10" servletOnly="false" ssl="false">
            <contexts>
                <listenerContext path="/">
                    <filterChains>
                        <filterChain path="/*" filters="securityFilter">
                            <conditions/>
                        </filterChain>
                    </filterChains>
                    <securityFilter loginUrl="/sso/login"
                            class="com.ataccama.server.http.security.keycloak.KeycloakSecurity">
                        <identityProviders>
                            <identityProvider configName="keycloak-local" clientId="one-admin-center-re" pattern="/**"/>
                        </identityProviders>
                        <interceptUrls>
                            <interceptUrl access="isAuthenticated()" pattern="/console/**"/>
                            <interceptUrl access="isAuthenticated()" pattern="/executor/**"/>
                            <interceptUrl access="isAuthenticated()" pattern="/api/dqd/**"/>
                        </interceptUrls>
                    </securityFilter>
                    <!-- If you are using 14.5.1 or later, the securityFilter element is defined
                         as follows. For more information, see the HTTP Dispatcher documentation.

                    <securityFilter class="com.ataccama.server.http.security.keycloak.KeycloakSecurity">
                        <identityProvider configName="localKeycloak" clientId="one-admin-center" roleMapping="realm_access.roles"/>
                        <interceptUrls>
                            <interceptUrl access="permitAll" pattern="/welcome/"/>
                            <interceptUrl access="hasRole('USER')" pattern="/licenses/"/>
                            <interceptUrl access="isAuthenticated()" pattern="/**"/>
                        </interceptUrls>
                        Optional:
                        <usePlatformDeployment>rdm</usePlatformDeployment>
                    </securityFilter>
                    -->
                </listenerContext>
            </contexts>
        </listenerBean>
    </listeners>
    <servletFilters/>
</component>

<!-- The configName here (configName="keycloak-local") and the keycloakConfig name
     in the runtime configuration (<keycloakConfig name="keycloak-local">) must match.

     Next, uncomment the Remote Executor component. It should look like this: -->
<component localRootFolder="/tmp/ataccama_executor" propertiesFile="../executor/executor.properties"
    enableEditProperties="false" prefix="/executor" disabled="false" maxRunningJobs="5"
    class="com.ataccama.dqc.executor.RemoteExecutorComponent"/>

<!-- After the Remote Executor component, find the Remote Access component.
     Uncomment it and change it as follows: -->
<component impersonate="false" prefix="/metadata" disabled="false"
        class="com.ataccama.server.component.hadoop.RemoteAccessComponent">
    <services>
        <!-- <iHadoopService prefix="/hcatalog" disabled="false"
            class="com.ataccama.server.component.hadoop.hcatalog.HCatalogService" cluster="hadoop"/> -->
        <iHadoopService prefix="/hcatalogJdbc" disabled="false"
            class="com.ataccama.server.component.hadoop.hcatalog.jdbc.HCatalogJdbcService"
            dataSourceName="database" cluster="hadoop"/>
        <!-- <iHadoopService prefix="/jdbc" disabled="false"
            class="com.ataccama.server.component.hadoop.hcatalog.jdbc.HCatalogJdbcService"
            dataSourceName="database"/> -->
    </services>
</component>
```
- Execute the following scripts in the /opt/ataccama/runtime/server/executor folder:
    - ./copy_client_conf.sh: Copies the Hadoop site XML file into the /opt/ataccama/runtime/server/executor/client_conf folder.
    - ./load_default_spark_properties.sh: Copies the Spark properties into the /opt/ataccama/runtime/server/executor/spark.properties file. New properties are added to the top of the file. After the script has run, open spark.properties and make the following changes:

```properties
## Spark properties might be stored in a different format.
## If there is whitespace before the value, replace the space with an equal sign (=) for the following three properties:
spark.sql.hive.metastore.jars=${env:HADOOP_COMMON_HOME}/../hive/lib/*:${env:HADOOP_COMMON_HOME}/client/*
spark.eventLog.dir=hdfs:/<path>/spark2ApplicationHistory
spark.yarn.historyServer.address=http://<url>:18089
```
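The whitespace-to-equals fix above can also be scripted. The sketch below demonstrates the substitution on a scratch file; the same sed expression can be applied to your real spark.properties (back it up first). The file path and property value here are illustrative only:

```shell
# Demo input: a property stored as "key value" instead of "key=value".
printf 'spark.eventLog.dir hdfs:/user/spark/spark2ApplicationHistory\n' > /tmp/demo-spark.properties

# Replace the first run of whitespace after the property name with "=".
sed -E -i 's/^(spark\.[A-Za-z0-9._]+)[[:space:]]+/\1=/' /tmp/demo-spark.properties

cat /tmp/demo-spark.properties
# -> spark.eventLog.dir=hdfs:/user/spark/spark2ApplicationHistory
```

Lines that already use the key=value form contain no whitespace after the property name, so the expression leaves them untouched.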
- Add the following line to the /opt/ataccama/runtime/server/executor/executor.properties file to enable Hadoop properties:

```properties
## Uncomment the following line:
hadoop.properties=hadoop.properties
```
- Configure Kerberos details using the following lines in /opt/ataccama/runtime/server/executor/hadoop.properties:

```properties
kerberos.conf=/etc/krb5.conf
kerberos.principal=name@domain
kerberos.keytab=/opt/ataccama/runtime/server/executor/keytab/filename.keytab
```
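A common failure mode at startup is a typo in these paths. The sketch below checks that each file referenced in hadoop.properties exists; it is a hypothetical helper, not an Ataccama tool, and it runs against a scratch copy here, so the paths are illustrative. Point PROPS at the real file on your server:

```shell
# Build a scratch properties file and matching dummy files for the demo.
PROPS=/tmp/demo-hadoop.properties
printf 'kerberos.conf=/tmp/demo-krb5.conf\nkerberos.keytab=/tmp/demo.keytab\n' > "$PROPS"
touch /tmp/demo-krb5.conf /tmp/demo.keytab

# Check that every file referenced by a kerberos.* path actually exists.
for key in kerberos.conf kerberos.keytab; do
  file=$(grep "^${key}=" "$PROPS" | cut -d= -f2-)
  if [ -f "$file" ]; then
    echo "OK: ${key} -> ${file}"
  else
    echo "MISSING: ${key} -> ${file}"
  fi
done
```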
- Update the start and stop scripts to use the new configuration files:
    - Update /app/ataccama/runtime/server/start.sh to use the new configuration file client.serverConfig:

```shell
COMMAND="./bin/onlinectl.sh -config ./server/etc/client.serverConfig start"
```

    - Update /app/ataccama/runtime/server/stop.sh to use the new configuration file client.serverConfig:

```shell
./bin/onlinectl.sh -config ./server/etc/client.serverConfig stop
```
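As a quick check that both scripts were updated consistently, you can grep them for the new config name. This is a hypothetical convenience, not a product script; the paths are the ones used in this article, so adjust them if your installation directory differs:

```shell
# Both scripts should reference client.serverConfig after the edit above.
for f in /app/ataccama/runtime/server/start.sh /app/ataccama/runtime/server/stop.sh; do
  if grep -q 'client.serverConfig' "$f" 2>/dev/null; then
    echo "OK: $f"
  else
    echo "CHECK: $f still uses another config (or was not found)"
  fi
done
```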
Start ONE Runtime Server

To start ONE Runtime Server, run the ./start.sh script.
If you want to run the process in the background, use the silent mode:

```shell
cd /opt/app/ataccama/runtime/server

# Start runtime in the silent mode
./start.sh -s

# Check logs
tail -f /opt/app/ataccama/runtime/server/logs/server.out

# To stop the server, run the stop script
./stop.sh
```