Neural Network in Hadoop


Neural Network in Hadoop

unmesha sreeveni

I am trying to implement a neural network in MapReduce. Apache Mahout refers to this paper <http://www.cs.stanford.edu/people/ang/papers/nips06-mapreducemulticore.pdf>:

Neural Network (NN): We focus on backpropagation. By defining a network structure (we use a three-layer network with two output neurons classifying the data into two categories), each mapper propagates its set of data through the network. For each training example, the error is back-propagated to calculate the partial gradient for each of the weights in the network. The reducer then sums the partial gradients from each mapper and does a batch gradient descent step to update the weights of the network.
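A minimal sketch of how I read that scheme (the class names, the comma-separated input format, and a single sigmoid unit with weights w0, w1, w2 standing in for the full three-layer network are all my own assumptions):

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Each mapper pushes its records through the network and accumulates a
    // partial gradient; nothing is emitted until cleanup(), and the weights
    // themselves are never modified here.
    class GradientMapper extends Mapper<LongWritable, Text, IntWritable, DoubleWritable> {
        private final double[] w = {0.1, 0.2, -0.1}; // current weights; in practice read from a side file
        private final double[] grad = new double[3]; // partial gradient for this input split

        @Override
        protected void map(LongWritable key, Text value, Context ctx) {
            String[] f = value.toString().split(",");          // assumed record layout: "x1,x2,target"
            double x1 = Double.parseDouble(f[0]);
            double x2 = Double.parseDouble(f[1]);
            double t  = Double.parseDouble(f[2]);
            double o  = sigmoid(w[0] + w[1] * x1 + w[2] * x2); // forward pass
            double d  = o * (1 - o) * (t - o);                 // back-propagated error term
            grad[0] += d;                                      // w0 acts as the bias weight
            grad[1] += d * x1;
            grad[2] += d * x2;
        }

        @Override
        protected void cleanup(Context ctx) throws IOException, InterruptedException {
            for (int i = 0; i < grad.length; i++)              // one partial sum per weight index
                ctx.write(new IntWritable(i), new DoubleWritable(grad[i]));
        }

        private static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }
    }

    // The reducer sums the partial gradients from all mappers and takes one
    // batch gradient descent step per weight.
    class GradientReducer extends Reducer<IntWritable, DoubleWritable, IntWritable, DoubleWritable> {
        private static final double ETA = 0.1;                  // learning rate (assumed)

        @Override
        protected void reduce(IntWritable i, Iterable<DoubleWritable> parts, Context ctx)
                throws IOException, InterruptedException {
            double sum = 0;
            for (DoubleWritable p : parts) sum += p.get();
            double oldW = currentWeight(i.get());                // hypothetical helper: read weight i
            ctx.write(i, new DoubleWritable(oldW + ETA * sum));  // single batch update
        }

        private double currentWeight(int i) { return 0; }        // stub for illustration; e.g. read from the distributed cache
    }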

Here <http://homepages.gold.ac.uk/nikolaev/311sperc.htm> is a worked-out example of the gradient descent algorithm:

Gradient Descent Learning Algorithm for Sigmoidal Perceptrons <http://pastebin.com/6gAQv5vb>

  1. What is the best way to parallelize a neural network algorithm from a MapReduce perspective? In the mapper, each record contributes a partial weight update (from the above example: w0, w1, w2; I suspect w0 is the bias). Random weights are assigned initially; the first record computes the output (o) and a delta-w, the second record also computes its output and adds its delta-w to the previous delta-w, and so on. In the reducer the gradients are summed, i.e. with 3 mappers we get 3 sets of (w0, w1, w2); these are summed, and the weights of the network are updated with batch gradient descent.
  2. In the above method, with more than one map task, how can we ensure which previous weight is taken? Each map task updates its own weights, so how can it be accurate? (See the sketch just after this list.)
  3. Where does backward propagation appear in the gradient descent algorithm mentioned above, or is the implementation fine as it is?
  4. What is the termination condition mentioned in the algorithm?
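On question 2, one arrangement I can imagine (purely my assumption; the paper does not spell it out) is that map tasks never keep their own updated weights: every task of an epoch loads the same weight file in setup(), only accumulates delta-w locally, and the single authoritative update happens in the reducer. A sketch:

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import org.apache.hadoop.mapreduce.Mapper;

    // Every map task of an epoch loads the *same* weight vector, so there is
    // no "previous weight" ambiguity between tasks; weights only change in
    // the reducer, between epochs.
    abstract class WeightLoadingMapper<KI, VI, KO, VO> extends Mapper<KI, VI, KO, VO> {
        protected double[] w;   // read-only within one epoch

        @Override
        protected void setup(Context ctx) throws IOException {
            URI[] cached = ctx.getCacheFiles();       // shipped by the driver via job.addCacheFile(...)
            File local = new File(new File(cached[0].getPath()).getName()); // symlinked into the work dir
            try (BufferedReader r = new BufferedReader(new FileReader(local))) {
                String[] parts = r.readLine().split(",");   // e.g. "0.5,-0.3,0.8"
                w = new double[parts.length];
                for (int i = 0; i < parts.length; i++) w[i] = Double.parseDouble(parts[i]);
            }
        }
    }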

Please help me with some pointers.

Thanks in advance.


--
Thanks & Regards

Unmesha Sreeveni U.B
Hadoop, Bigdata Developer
Centre for Cyber Security | Amrita Vishwa Vidyapeetham



Re: Neural Network in Hadoop

Alpha Bagus Sunggono
In my opinion:
- This is just one iteration. Batch gradient descent means: find all the deltas, then update all the weights. So it is improper for each mapper to keep its own updated weights; the weights should be updated only after the reduce step.
- Backpropagation comes after the reduce step.
- This iteration should be repeated again and again. The termination condition should be measured by the delta error of the sigmoid output at the end of the mapper. The iteration can stop once that delta error is suitably small.
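A rough sketch of the driver loop I mean (the mapper and reducer classes from the first message, the gradient-norm helper, and the threshold value are all assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // One MapReduce job per training epoch, repeated until the summed
    // gradient (the delta error) is small enough.
    public class TrainingDriver {
        public static void main(String[] args) throws Exception {
            double epsilon = 1e-4;                       // termination threshold (assumed)
            for (int epoch = 0; epoch < 1000; epoch++) {
                Job job = Job.getInstance(new Configuration(), "nn-epoch-" + epoch);
                job.setJarByClass(TrainingDriver.class);
                job.setMapperClass(GradientMapper.class);
                job.setReducerClass(GradientReducer.class);
                job.setOutputKeyClass(IntWritable.class);
                job.setOutputValueClass(DoubleWritable.class);
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1] + "/epoch-" + epoch));
                if (!job.waitForCompletion(true)) return;
                double deltaError = gradientNorm(args[1] + "/epoch-" + epoch);
                if (deltaError < epsilon) break;         // small enough: stop iterating
            }
        }

        // hypothetical helper: read the summed gradient back from the epoch's output
        static double gradientNorm(String dir) { return 0; }
    }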


--
Alpha Bagus Sunggono 

Re: Neural Network in Hadoop

Ted Dunning
In reply to this post by unmesha sreeveni

That is a really old paper that basically pre-dates all of the recent important work in neural networks.

You should look at work on Rectified Linear Units (ReLU), dropout regularization, parameter servers (Downpour SGD), and deep learning.

MapReduce as you have used it will not produce interesting results because the overhead of MapReduce will be far too high.

Here are some references:


Re: Neural Network in Hadoop

unmesha sreeveni
In reply to this post by Alpha Bagus Sunggono


On Thu, Feb 12, 2015 at 4:13 PM, Alpha Bagus Sunggono <[hidden email]> wrote:
> In my opinion:
> - This is just one iteration. Batch gradient descent means: find all the deltas, then update all the weights. So it is improper for each mapper to keep its own updated weights; the weights should be updated only after the reduce step.
> - Backpropagation comes after the reduce step.
> - This iteration should be repeated again and again.
I doubt whether the iteration is per record. Say, for example, we have just 5 records: will there be 5 iterations, or is it some other concept? From the above example, ∆w0, ∆w1, ∆w2 will be the delta error. So let us say we have a threshold value: for each record we check whether ∆w0, ∆w1, ∆w2 are less than or equal to the threshold value, and otherwise continue the iteration. Is it like that? Am I wrong?

Sorry, I am not very clear on the iteration part.
 
> The termination condition should be measured by the delta error of the sigmoid output at the end of the mapper. The iteration can stop once that delta error is suitably small.

Is there any criterion for updating the delta weights? After calculating the output of the perceptron, we find the error oj*(1-oj)*(tj-oj) and check whether it is less than the threshold: if it is, the delta weight is not updated; otherwise, the delta weight is updated. Is it like that?
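Or is it rather like the following, where delta-w is accumulated for every record unconditionally and the threshold is only checked once per pass against an aggregate error? A stand-alone sketch (the data, initial weights, eta, and epsilon are all made up for illustration):

    // Toy single-perceptron version: accumulate delta-w for every record,
    // apply one batch update per epoch, and stop when the mean squared
    // error falls below the threshold.
    public class EpochTermination {
        public static void main(String[] args) {
            double[][] x = {{1, 0}, {0, 1}, {1, 1}};     // toy inputs
            double[] t = {1, 1, 0};                      // toy targets
            double[] w = {0.5, -0.3, 0.8};               // w[0] is the bias weight
            double eta = 0.1, epsilon = 0.01;            // learning rate and threshold (assumed)

            for (int epoch = 0; epoch < 100000; epoch++) {
                double[] dw = new double[3];
                double err = 0;
                for (int n = 0; n < x.length; n++) {
                    double o = 1 / (1 + Math.exp(-(w[0] + w[1] * x[n][0] + w[2] * x[n][1])));
                    double d = o * (1 - o) * (t[n] - o); // oj*(1-oj)*(tj-oj)
                    dw[0] += eta * d;                    // always accumulated, never skipped
                    dw[1] += eta * d * x[n][0];
                    dw[2] += eta * d * x[n][1];
                    err += (t[n] - o) * (t[n] - o);      // squared error, summed over records
                }
                for (int i = 0; i < 3; i++) w[i] += dw[i];   // one batch update per epoch
                if (err / x.length < epsilon) {              // per-epoch check, not per-record
                    System.out.println("converged after " + (epoch + 1) + " epochs");
                    break;
                }
            }
        }
    }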






--
Thanks & Regards

Unmesha Sreeveni U.B
Hadoop, Bigdata Developer
Centre for Cyber Security | Amrita Vishwa Vidyapeetham