[jira] Updated: (MAHOUT-11) Static fields used throughout clustering code (Canopy, K-Means).



Tim Allison (Jira)

     [ https://issues.apache.org/jira/browse/MAHOUT-11?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Drew Farris updated MAHOUT-11:
------------------------------

    Attachment: MAHOUT-11-all-cleanup-20091128.patch

MAHOUT-11-all-cleanup-20091128.patch eliminates the use of static fields for configuration everywhere it appeared in the clustering code: canopy, kmeans, fuzzykmeans, and meanshift. It retains Isabel's original patch to the kmeans package (apart from the items discussed previously) and makes similar changes to the other packages. It also carries over the previously posted fix and unit test for RandomSeedGenerator.

Applied against rev 883446; all unit tests pass, and I've run the kmeans code on real data. It would be great if someone could double-check the changes and comment.
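To illustrate the shape of the cleanup (class and property names here are hypothetical, not taken from the patch): configuration is read into instance fields from a per-job configuration object, in the spirit of Hadoop's JobConf-driven configure() callback, instead of being parked in static fields.

```java
import java.util.Properties;

// Hedged sketch of the per-instance configuration pattern. Properties
// stands in for Hadoop's JobConf; CanopyLikeMapper is a made-up name.
public class CanopyLikeMapper {
  // Instance fields: each mapper object carries its own settings, so two
  // jobs in one JVM cannot clobber each other's thresholds.
  private double t1;
  private double t2;

  // Mirrors the role of configure(JobConf): called once per task with
  // that job's configuration.
  public void configure(Properties conf) {
    t1 = Double.parseDouble(conf.getProperty("canopy.t1", "3.0"));
    t2 = Double.parseDouble(conf.getProperty("canopy.t2", "1.5"));
  }

  public double getT1() { return t1; }
  public double getT2() { return t2; }

  public static void main(String[] args) {
    Properties jobA = new Properties();
    jobA.setProperty("canopy.t1", "5.0");
    Properties jobB = new Properties();
    jobB.setProperty("canopy.t1", "10.0");

    CanopyLikeMapper a = new CanopyLikeMapper();
    CanopyLikeMapper b = new CanopyLikeMapper();
    a.configure(jobA);
    b.configure(jobB);

    // Each instance keeps its own value; with static fields, configuring
    // b would have overwritten a's threshold.
    System.out.println(a.getT1() + " " + b.getT1());
  }
}
```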


> Static fields used throughout clustering code (Canopy, K-Means).
> ----------------------------------------------------------------
>
>                 Key: MAHOUT-11
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-11
>             Project: Mahout
>          Issue Type: Bug
>          Components: Clustering
>    Affects Versions: 0.1
>            Reporter: Dawid Weiss
>             Fix For: 0.3
>
>         Attachments: MAHOUT-11-all-cleanup-20091128.patch, MAHOUT-11-kmeans-cleanup.patch, MAHOUT-11-RandomSeedGenerator.patch, MAHOUT-11.patch
>
>
> I file this as a bug, even though I'm not 100% sure it is one. In the current code, information is exchanged via static fields (for example, the distance measure and thresholds for Canopies are static fields). Is it always true in Hadoop that one job runs inside one JVM with exclusive access? I haven't seen it anywhere in the Hadoop documentation, and my impression was that everything uses JobConf to pass configuration to jobs, and that jobs are configured on a per-object basis (a job is an object, a mapper is an object, and everything else is basically an object).
> If it's possible for two jobs to run in parallel inside one JVM, then this is a limitation and a bug in our code that needs to be addressed.
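The hazard described above can be sketched in a few lines (the class name is hypothetical, not from Mahout): a static field means one copy per JVM, so the second job to call configure() silently overwrites the first job's settings.

```java
// Minimal illustration of the static-field hazard: every instance in the
// JVM shares the one static copy of the configuration value.
public class StaticConfigMapper {
  static double threshold; // one copy per JVM, not per job

  static void configure(double t) { threshold = t; }

  public static void main(String[] args) {
    StaticConfigMapper.configure(3.0); // job A configures its mappers
    StaticConfigMapper.configure(8.0); // job B configures in the same JVM
    // Job A's mappers now see job B's threshold:
    System.out.println(StaticConfigMapper.threshold); // prints 8.0
  }
}
```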

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.