Spark impersonation using the open-source Livy
1. Hadoop core-site.xml
a. Add the following properties (here `centos` is the account Livy runs as; substitute your own account name):
<property>
  <name>hadoop.proxyuser.centos.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.centos.groups</name>
  <value>*</value>
</property>
Apply the change with `hdfs dfsadmin -refreshSuperUserGroupsConfiguration` (and `yarn rmadmin -refreshSuperUserGroupsConfiguration`), or restart the services.
b. On Ambari, add these properties under Custom core-site.
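If you manage core-site.xml by hand rather than through Ambari, the edit above can also be scripted. A minimal sketch (the account name `centos` is carried over from the example above; the XML shown is a stand-in for your real file):

```python
# Sketch: append the hadoop.proxyuser.<user>.hosts/groups properties
# to a core-site.xml document. The user name is an example value.
import xml.etree.ElementTree as ET

def add_proxyuser_properties(core_site_xml: str, user: str) -> str:
    """Return core-site.xml text with the two proxyuser properties set to *."""
    root = ET.fromstring(core_site_xml)
    for suffix in ("hosts", "groups"):
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = f"hadoop.proxyuser.{user}.{suffix}"
        ET.SubElement(prop, "value").text = "*"
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # A stand-in document; in practice, read your real core-site.xml.
    print(add_proxyuser_properties("<configuration></configuration>", "centos"))
```

Remember that the NameNode and ResourceManager only pick up proxyuser changes after a refresh or restart, as noted above.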
2. Livy configuration
a. In livy.conf:
livy.impersonation.enabled = true
livy.server.csrf_protection.enabled = false
(If you leave CSRF protection set to true and call the API with POST, you will get a "Missing Required Header for CSRF protection" error; add the header X-Requested-By: ambari to the request.)
b. On Ambari, edit Spark > Advanced livy-conf:
livy.environment = production
livy.impersonation.enabled = true
livy.server.csrf_protection.enabled = false
3. REST API test
a. POST http://localhost:8998/sessions
Request body:
{
  "kind": "spark",
  "proxyUser": "john"
}
b. POST http://localhost:8998/sessions/{sessionId}/statements
Request body:
{
  "code": "var readMe = sc.textFile(\"/user/john/input-data/sample.csv\"); readMe.take(5);"
}
c. GET http://localhost:8998/sessions/{sessionId}/statements
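The three calls above can be strung together into a small client. A sketch using only the standard library, assuming Livy listens on localhost:8998 and the proxy user is "john" as in the examples (the X-Requested-By header covers the CSRF case described in step 2):

```python
# Sketch of the REST calls above: create a session as a proxy user,
# then submit a statement. Host, port, and user names are the example
# values from this post; adjust for your cluster.
import json
import urllib.request

LIVY = "http://localhost:8998"
HEADERS = {
    "Content-Type": "application/json",
    "X-Requested-By": "ambari",  # required when CSRF protection is enabled
}

def session_body(proxy_user: str) -> dict:
    """Request body for POST /sessions."""
    return {"kind": "spark", "proxyUser": proxy_user}

def statement_body(code: str) -> dict:
    """Request body for POST /sessions/{id}/statements."""
    return {"code": code}

def post(path: str, body: dict) -> dict:
    req = urllib.request.Request(
        LIVY + path, data=json.dumps(body).encode(),
        headers=HEADERS, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    sess = post("/sessions", session_body("john"))
    sid = sess["id"]
    post(f"/sessions/{sid}/statements",
         statement_body('sc.textFile("/user/john/input-data/sample.csv").take(5)'))
```

Because the session belongs to proxy user "john", the statement reads /user/john/... with john's HDFS permissions, which is the point of the impersonation setup.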