I am trying to copy a local file to my HDFS cluster, which I deployed with the Gradiant helm chart:

helm install hdfs https://github.com/Gradiant/charts/releases/download/hdfs-0.1.0/hdfs-0.1.0.tgz -f hdfs-values.yaml

My hdfs-values.yaml file:

kubectl get pods shows that all pods are running and ready:
NAME READY STATUS RESTARTS AGE
hdfs-httpfs-5686fd75df-2pgk7 1/1 Running 0 59m
hdfs-namenode-0 2/2 Running 1 59m
hdfs-datanode-0 1/1 Running 0 59m
hdfs-datanode-1 1/1 Running 0 58m
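Before port-forwarding I also looked at the namenode service, just as a sanity check on which ports the chart exposes (the exact service list may differ between chart versions):

# confirm the namenode service and its ports before port-forwarding
kubectl get svc hdfs-namenode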
I use port-forwarding to access the HDFS running in K8s from my local machine:
# namenode web UI
kubectl port-forward svc/hdfs-namenode 50070:50070
# hdfs port
kubectl port-forward hdfs-namenode-0 8020:8020
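As a quick check that the forwards themselves are up (assuming curl and nc are available on the local machine):

# namenode web UI through the forward
curl -s http://localhost:50070/ | head -n 5
# namenode RPC port through the forward
nc -vz localhost 8020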
On my local machine, I have just unpacked a Hadoop 2 distribution (2.10.0) and updated core-site.xml as follows so that it uses the forwarded port:
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:8020</value>
<description>The name of the default file system. Either the
literal string "local" or a host:port for NDFS.
</description>
<final>true</final>
</property>
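Side note: fs.default.name is the old key and is deprecated in Hadoop 2 in favour of fs.defaultFS, but it is still honoured. To double-check which filesystem URI the client actually resolves (run from the unpacked Hadoop 2.10.0 directory):

# prints the resolved default filesystem; expected: hdfs://localhost:8020
bin/hdfs getconf -confKey fs.defaultFS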
# ok :
hdfs dfs -mkdir /jars
# not ok :
hdfs dfs -put helloSpark.jar /jars
20/12/11 09:50:53 INFO hdfs.DataStreamer: Exception in createBlockOutputStream
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:532)
at org.apache.hadoop.hdfs.DataStreamer.createSocketForPipeline(DataStreamer.java:259)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1699)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
20/12/11 09:50:53 WARN hdfs.DataStreamer: Abandoning BP-831521929-10.42.1.6-1607678530556:blk_1073741827_1003
20/12/11 09:50:53 WARN hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[10.42.1.5:50010,DS-4153f502-30da-42d7-a415-69601658066a,DISK]
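The address in the "Excluding datanode" line (10.42.1.5:50010) looks like an in-cluster pod IP; as a quick check (assuming nc is installed locally) I can test whether that address is reachable from my laptop at all:

# datanode address copied from the DataStreamer log above
nc -vz -w 2 10.42.1.5 50010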
I don't see any errors in the datanode logs, and in the namenode log the only error is the same one as above.
Did I miss something in the configuration?
Thanks :)