[jira] [Created] (HADOOP-16217) Spark uses Path.getFileSystem(conf) to get a FileSystem for writing to HDFS, but a "Permission denied" error occurs

JIRA jira@apache.org
huanghuai created HADOOP-16217:
----------------------------------

             Summary: Spark uses Path.getFileSystem(conf) to get a FileSystem for writing to HDFS, but a "Permission denied" error occurs
                 Key: HADOOP-16217
                 URL: https://issues.apache.org/jira/browse/HADOOP-16217
             Project: Hadoop Common
          Issue Type: Improvement
          Components: auth
    Affects Versions: 2.7.4
            Reporter: huanghuai


SparkSession spark = SparkSession.builder()
        .master("local[2]")
        .appName("test")
        .config("spark.default.parallelism", 1)
        .getOrCreate();

Dataset<Row> ds = spark.read().csv("file:///d:/test.csv");

ds.write().mode(SaveMode.Overwrite).option("header", "true")
        .csv("hdfs://10.10.202.26:9000/testfloder");

*{color:#FF0000}-------------------------- above is the code --------------------------------{color}*

Question:

The current Windows user name is "admin".

The HDFS directory "testfloder" was created by user "root", and its permissions are drwxr-xr-x.

I want to write data to HDFS as "root", but I could not find where to set this option.

If I obtain the FileSystem with FileSystem.get(uri, conf, "root") and write something, it works fine.
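For reference, the working variant can be sketched as follows. This is a minimal sketch, not Spark's code path: FileSystem.get(URI, Configuration, String) is the standard Hadoop overload for choosing the remote user, while the file name demo.txt is only an illustration.

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteAsRoot {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The three-argument overload performs the Hadoop login as the
        // given user ("root") instead of the local Windows account ("admin").
        FileSystem fs = FileSystem.get(
                new URI("hdfs://10.10.202.26:9000"), conf, "root");
        // demo.txt is a hypothetical file name, just to show a write.
        try (FSDataOutputStream out = fs.create(new Path("/testfloder/demo.txt"))) {
            out.writeBytes("hello");
        }
    }
}
```

This bypasses the problem because the user is supplied explicitly rather than derived from the OS login.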

 

The call chain that creates the FileSystem is:

Path.getFileSystem --> FileSystem.get(.., conf) --> CACHE.get(uri, conf) --> new Key(uri, conf) --> UserGroupInformation.getCurrentUser() --> getLoginUser() --> ... --> invokePriv(COMMIT_METHOD) --> NTLoginModule#login --> new NTSystem(debug) --> NTSystem#getCurrent

NTSystem#getCurrent is a native method, so I cannot modify it.
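As far as I know, the usual way around the native NT login is to run the filesystem access inside UserGroupInformation.createRemoteUser(...).doAs(...): any FileSystem created inside the doAs block is cached under that user rather than the OS account. A sketch under that assumption, reusing the cluster address from above:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsRoot {
    public static void main(String[] args) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("root");
        // Everything inside doAs() runs with "root" as the current user,
        // so the FileSystem cache keys the connection to "root", not "admin".
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://10.10.202.26:9000");
            FileSystem fs = FileSystem.get(conf);
            fs.mkdirs(new Path("/testfloder"));
            return null;
        });
    }
}
```

For Spark, the SparkSession creation and the write would presumably also have to happen inside the doAs block for Path.getFileSystem(conf) to pick up the right user.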

 

What should I do? Setting

System.setProperty("HADOOP_USER_NAME", "root");
System.setProperty("user.name", "root");

has no effect.
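For what it's worth, UserGroupInformation caches the login user the first time any Hadoop class triggers a login, so if the HADOOP_USER_NAME property is going to help at all, it would have to be set before the SparkSession (or any FileSystem) is created. A sketch of that ordering (the Spark call is only a placeholder comment):

```java
public class HadoopUserNameOrdering {
    public static void main(String[] args) {
        // Must run before the first SparkSession/FileSystem call:
        // once the login user is cached, changing the property does nothing.
        System.setProperty("HADOOP_USER_NAME", "root");

        // ... only now build the SparkSession and run ds.write() ...
        System.out.println(System.getProperty("HADOOP_USER_NAME"));
    }
}
```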

org.apache.hadoop.security.AccessControlException: Permission denied: user=admin, access=WRITE, inode="/testfloder/_temporary/0":root:supergroup:drwxr-xr-x
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
 at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
 at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
 at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
 at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
 at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
 at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
 at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
 at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
 at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
 at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
 at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
 at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3007)
 at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2975)
 at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
 at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
 at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
 at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
 at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1881)
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:313)
 at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:162)
 at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:139)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
