Help getting the fix for HADOOP-13617 committed


Jelmer Kuperus
Hi, I am hoping someone here can guide me through the process of
getting the fix for HADOOP-13617 committed.

We started using Hadoop's OpenStack filesystem (Swift) as a storage backend
for Apache Flink and ran into a problem: every few hours an error would be
triggered that caused our Flink jobs to be restarted.

The problem happens when the token used to make API calls expires.
At that point the server returns a 401 (Unauthorized) status code.

What should happen next is that the token is refreshed and the
request retried with the new token.

What actually happens is that although a new token is requested, the
request is retried with the old token, so it always fails.

The fix is simple: set the correct authentication header on the
request before retrying.
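To make the failure mode concrete, here is a minimal, self-contained sketch of the retry pattern in Java. It is not the actual Hadoop Swift client code; all names (TokenStore, send, etc.) are hypothetical stand-ins that simulate an expiring token and a server that rejects it.

```java
import java.util.concurrent.atomic.AtomicReference;

public class RetryWithFreshToken {
    // Hypothetical token store: refresh() obtains a new token.
    static class TokenStore {
        private final AtomicReference<String> token = new AtomicReference<>("token-1");
        String current() { return token.get(); }
        String refresh() { token.set("token-2"); return token.get(); }
    }

    // Simulated server: only the fresh token is accepted; the old one has expired.
    static int send(String authHeader) {
        return "token-2".equals(authHeader) ? 200 : 401;
    }

    // Buggy variant: a new token is fetched, but the retry reuses the stale header.
    static int buggyRequest(TokenStore store) {
        String header = store.current();
        int status = send(header);
        if (status == 401) {
            store.refresh();       // new token obtained...
            status = send(header); // ...but the old header is resent, so it fails again
        }
        return status;
    }

    // Fixed variant: re-read the header from the refreshed token before retrying.
    static int fixedRequest(TokenStore store) {
        String header = store.current();
        int status = send(header);
        if (status == 401) {
            header = store.refresh(); // set the correct auth header on the retry
            status = send(header);
        }
        return status;
    }

    public static void main(String[] args) {
        System.out.println("buggy: " + buggyRequest(new TokenStore())); // 401
        System.out.println("fixed: " + fixedRequest(new TokenStore())); // 200
    }
}
```

The buggy variant keeps returning 401 forever, which is why long-running jobs eventually trip over it once the first token expires.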

It turns out that this issue had already been reported in 2016 and a patch
was provided by the original reporter.

However, for some reason, the patch was never merged.

Because the code has changed a bit since the patch was created, I took the
liberty of adapting it so that it compiles cleanly against the current trunk:

https://github.com/apache/hadoop/pull/361

Is there anyone who could help me make sure this one-line fix gets
merged? It's a small change, but a very important one if you use this
filesystem for long-running tasks such as streaming Flink jobs.



--
Sent from: http://hadoop-common.472056.n3.nabble.com/Developers-f17302.html
