Hadoop + HBase + Cygwin + Windows 7 x64

In this post I will describe how to get a Hadoop environment with HBase running in Cygwin on Windows 7 x64.

Having spent the better part of a week reading through blog posts and documentation, I found that none of them covered the process in full detail, at least not for the software versions I intended to use.

This guide was written for Cygwin 1.7.7, Hadoop 0.21.0 and HBase 0.20.6.

UPDATE (Sept. 5, 2011): I no longer have this system running (I switched to Ubuntu) and will most likely not be able to answer questions about the setup. I recommend asking your questions on the hadoop-users mailing list; you will find information on how to subscribe and post to the list on the Hadoop website.

UPDATE (May 25, 2011): If you are using this guide, remember to have a look at the comments; some of them concern version updates and other related issues.

UPDATE (Nov. 1, 2010): I've noticed some errors arising when using Hadoop 0.21.0 with HBase 0.20.6 and have gone back to Hadoop 0.20.2, which does not produce the same errors. If you intend to use HBase together with Hadoop, I recommend setting up Hadoop 0.20.2 instead; the installation is more or less identical.

You will additionally need ZooKeeper 3.3.1 in order to get HBase to run properly.

Throughout this guide I will assume that your Cygwin install path is c:\cygwin and that Hadoop, ZooKeeper and HBase will be installed in c:\cygwin\usr\local (/usr/local/); this is, however, something you can choose yourself. If you choose to install Cygwin elsewhere, I recommend using folder names without whitespace and other non-standard characters.

The only prerequisite for this guide is that you have Java installed and added to your %PATH% variable (which is usually done automatically).

Software

Download each software bundle and put it somewhere you'll easily find it later.

Cygwin

If you've never used Cygwin (or Linux/Unix/etc), you should perhaps get familiar with those environments first. If you still want to continue, read on.

If you find yourself lost anywhere in the Cygwin section, please follow Vlad Korolev's guide on how to get Cygwin up and running for Hadoop, and make sure to additionally install tcp_wrappers and diffutils when choosing packages. Follow steps 2 to 4 in the guide and then continue with the Hadoop installation guide below.

If you're familiar with Cygwin you just need to make sure you have these packages installed:

  • openssh
  • openssl
  • tcp_wrappers
  • diffutils
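
If you want to double-check that these are present, cygcheck (which ships with Cygwin) can verify installed packages from the terminal. A quick sanity check:

cygcheck -c openssh openssl tcp_wrappers diffutils

Each package should show up with a version number and the status OK.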

Additionally, you will have to configure ssh to start as a service and enable passwordless logins. To do this, fire up a Cygwin terminal window after you've completed the installation and do the following:

ssh-host-config

When asked about privilege separation, answer no.
When asked whether sshd should be installed as a service, answer yes.
When asked about the CYGWIN environment variable, enter ntsec.

Now go to the Services and Applications tool in Windows, locate the CYGWIN sshd service and start it.
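
If you'd rather stay in the terminal, the service can also be started from there; either of these should work, provided the shell has sufficient privileges:

cygrunsrv -S sshd
net start sshd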
The next Cygwin step is to set up the passwordless login. Go to your Cygwin terminal and enter:

ssh-keygen

Do not use a passphrase and accept all default values. After the process is finished, do the following:

cd ~/.ssh
cat id_rsa.pub >> authorized_keys

This will add your identification key to the set of authorized keys, i.e. those that are allowed to log in without entering a password.
Try connecting to localhost to see whether it works:

ssh localhost

The first time you do this you should be prompted with a warning; type yes and press enter. Now try issuing the same command again; this time there should be no warning and no need to enter a password.

This concludes the Cygwin installation.

Hadoop

Since Vlad's guide is made for Hadoop 0.19.0, some of the configuration details specified in his guide no longer apply (or have moved to other files); what follows is an updated version of what you'll find in his guide.

First, copy the downloaded tar.gz file to c:\cygwin\usr\local (which corresponds to /usr/local in the Cygwin environment). When this is done, extract the package by issuing:

tar xvzf hadoop-0.21.0.tar.gz

This command extracts the content of the downloaded Hadoop file into c:\cygwin\usr\local\hadoop-0.21.0 (/usr/local/hadoop-0.21.0).
Hadoop requires some configuration; the configuration files are located in c:\cygwin\usr\local\hadoop-0.21.0\conf.
The files that need to be altered are:

core-site.xml

<property>
 <name>fs.default.name</name>
 <value>hdfs://127.0.0.1:9100</value>
</property>

mapred-site.xml

<property>
 <name>mapred.job.tracker</name>
 <value>127.0.0.1:9101</value>
</property>

and hdfs-site.xml

<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>
<property>
 <name>dfs.permissions</name>
 <value>false</value>
</property>
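
Note that in each of these files the property elements must go inside the existing (initially empty) <configuration> element; the snippets above show only the properties themselves. As an illustration, the complete core-site.xml ends up looking like this:

<?xml version="1.0"?>
<configuration>
 <property>
  <name>fs.default.name</name>
  <value>hdfs://127.0.0.1:9100</value>
 </property>
</configuration>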

Only Hadoop 0.21.0: Next, one line has to be added to the hadoop-config.sh file in hadoop-0.21.0/bin:

CLASSPATH=`cygpath -wp "$CLASSPATH"`

Add this line before the line containing

JAVA_LIBRARY_PATH=''

The reason for this is that in order for the CLASSPATH to be built with all the Hadoop jars (lines ~120 to ~200) the path needs to be in the Cygwin format (/cygdrive/c/cygwin/usr/local/hadoop...), but in order for Java to use the classpath, it needs to be in the Windows format (c:\cygwin\usr\local\hadoop...). The added line transforms the Cygwin-built classpath into one that is understood by Windows.
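
To see what the conversion does, you can try cygpath on a small path list yourself (example output, assuming Cygwin is installed in c:\cygwin):

cygpath -wp "/usr/local/hadoop-0.21.0:/usr/local/hadoop-0.21.0/conf"
C:\cygwin\usr\local\hadoop-0.21.0;C:\cygwin\usr\local\hadoop-0.21.0\conf

The -w flag converts to the Windows form, and -p treats the argument as a path list, turning colons into semicolons.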

This should be enough for Hadoop to run. Test the installation by issuing these commands in a Cygwin window:

cd /usr/local/hadoop-0.21.0
mkdir logs
bin/hadoop namenode -format

The last command will take a few seconds to finish and should produce about 20 lines of output while the namenode filesystem is created.
The final step of the Hadoop setup is to start it and test it.
To start it issue the following commands in a Cygwin window:

cd /usr/local/hadoop-0.21.0
bin/start-dfs.sh
bin/start-mapred.sh

Provided no error messages are printed, this should have started Hadoop. This can be checked by opening the web interfaces at http://localhost:50070 and http://localhost:50030 in a browser (note that these are the default web UI ports, not the RPC ports configured above). The first link should provide information about the NameNode; make sure that the Live Nodes count is 1. The second link provides information about the Map/Reduce cluster.
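
You can also check from the terminal: jps (bundled with the JDK) lists the running Java processes, and dfsadmin reports the state of the filesystem. A quick sanity check, assuming the daemons started cleanly:

cd /usr/local/hadoop-0.21.0
jps
bin/hadoop dfsadmin -report

jps should list NameNode, DataNode, JobTracker and TaskTracker (among others), and the dfsadmin report should show one live datanode.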
Now it's time to run a little job on the cluster to see whether or not the installation was successful.
First, copy some files to the node:

cd /usr/local/hadoop-0.21.0
mkdir input
cp conf/*.xml input
bin/hadoop jar hadoop-*examples.jar grep input output 'dfs[a-z.]+'
cat output/*

Provided there were no errors, you've just run your first Hadoop process.

Apache ZooKeeper

This step, it seems, is only necessary if you're installing the setup on 64-bit Windows.
The problem seems to be that the ZooKeeper server which comes bundled with HBase does not work correctly, so a standalone one needs to be set up.

Luckily the ZooKeeper install and configuration is quite easy.

First, copy the downloaded zookeeper-3.3.1.tar.gz file to your c:\cygwin\usr\local directory, open a Cygwin window and issue the following commands:

cd /usr/local/
tar xvzf zookeeper-3.3.1.tar.gz

ZooKeeper's configuration file (zoo.cfg) is located in /usr/local/zookeeper-3.3.1/conf (c:\cygwin\usr\local\zookeeper-3.3.1\conf).
Open the file and paste the following content into it, overwriting the original config:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/tmp/zookeeper/data
# the port at which the clients will connect
clientPort=2181

Make sure to create the /tmp/zookeeper/data directory and make it writable for everyone (chmod 777).
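
In a Cygwin terminal that amounts to:

mkdir -p /tmp/zookeeper/data
chmod -R 777 /tmp/zookeeper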

ZooKeeper is started by typing:

cd /usr/local/zookeeper-3.3.1
bin/zkServer.sh start

Make sure to test whether ZooKeeper is running correctly by connecting to it:

bin/zkCli.sh -server 127.0.0.1:2181

This should connect you to ZooKeeper; you can type help to see what commands are available, however the only one you need to care about is quit.

HBase

Start by copying hbase-0.20.6.tar.gz to c:\cygwin\usr\local and extracting it by issuing

tar xvzf hbase-0.20.6.tar.gz

in a Cygwin terminal.

Now it's time to create a symlink to your JRE directory in /usr/local/. Do this by typing:

ln -s /cygdrive/c/Program\ Files/Java/<jre name> /usr/local/<jre name>

in a Cygwin terminal. <jre name> will most likely be jre6, but be sure to double check this before making the link.

HBase's configuration files are located in /usr/local/hbase-0.20.6/conf/ (C:\cygwin\usr\local\hbase-0.20.6\conf), and to get HBase up and running we need to edit hbase-env.sh and hbase-default.xml.

In hbase-env.sh the JAVA_HOME, HBASE_IDENT_STRING and HBASE_MANAGES_ZK variables have to be set; this is done by editing the lines containing the variable names to read:

export JAVA_HOME=/usr/local/jre6
export HBASE_IDENT_STRING=$HOSTNAME
export HBASE_MANAGES_ZK=false

The last variable tells HBase not to use the bundled ZooKeeper server, as we've already installed a standalone one.

Next, the hbase-default.xml file has to be edited; the two properties that need to be set are hbase.rootdir and hbase.tmp.dir:

<property>
 <name>hbase.rootdir</name>
 <value>file:///C:/cygwin/tmp/hbase/data</value>
 <description>The directory shared by region servers.
 Should be fully-qualified to include the filesystem to use.
 E.g: hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR
 </description>
</property>
<property>
 <name>hbase.tmp.dir</name>
 <value>C:/cygwin/tmp/hbase/tmp</value>
 <description>Temporary directory on the local filesystem.</description>
</property>

Make sure that both directories exist and are writable by all users (chmod 777).
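
Since C:\cygwin\tmp corresponds to /tmp inside Cygwin, this amounts to:

mkdir -p /tmp/hbase/data /tmp/hbase/tmp
chmod -R 777 /tmp/hbase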

The command for starting HBase is:

cd /usr/local/hbase-0.20.6
bin/start-hbase.sh
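
Once HBase is up, you can give it a quick smoke test from the HBase shell (a minimal sketch; the table name 'test' and column family 'cf' are arbitrary):

bin/hbase shell
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'
exit

The scan should print the row you just inserted.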

This section is very similar to what's found on the HBase wiki; the difference is the standalone ZooKeeper config.

Start your cluster

Having done all these steps, it's time to start up the cluster.

The startup procedure should follow this order:

  1. ZooKeeper
  2. Hadoop
  3. HBase

So what you do is:

ZooKeeper:

cd /usr/local/zookeeper-3.3.1
bin/zkServer.sh start

Hadoop:

cd /usr/local/hadoop-0.21.0
bin/start-dfs.sh
bin/start-mapred.sh

HBase:

cd /usr/local/hbase-0.20.6
bin/start-hbase.sh
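
If you find yourself doing this often, the whole sequence can be wrapped in a small script (a sketch, assuming the install paths used throughout this guide):

#!/bin/bash
# start-cluster.sh - start ZooKeeper, Hadoop and HBase in the correct order
(cd /usr/local/zookeeper-3.3.1 && bin/zkServer.sh start)
(cd /usr/local/hadoop-0.21.0 && bin/start-dfs.sh && bin/start-mapred.sh)
(cd /usr/local/hbase-0.20.6 && bin/start-hbase.sh)
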
Acknowledgments

In order to get my system up and running I used tutorials and information posted by others; this guide is simply an aggregation of several resources, chief among them Vlad Korolev's Cygwin guide and the HBase wiki mentioned above.

Comments

  • Ghostroberto

    I closed the cmd by mistake the first time I set up this environment; when I opened it again and typed "no" when asked about privilege separation, the other 2 questions didn't appear for me. Is this a problem or just fine?

    • http://alansaid.com Alan

      try typing "ssh localhost"; if you aren't asked for a password it should work. Otherwise try running ssh-host-config again.

    • http://profile.yahoo.com/2OMUQXUJ6T4NIGLS3ZXAAUNRFY amsal

      hi..
      i want to access hbase table from hadoop mapreduce....i m using windowsXP and cygwin
      i m using hadoop-0.20.2 and hbase-0.92.0
      hadoop cluster is working fine....i am able to run mapreduce wordcount successfully on 3 pc's
      hbase is also working .....i can create table from shell

      i have tried many examples but they are not working....when i try to compile it using
      javac Example.java

      it gives error.....
      org.apache.hadoop.hbase.client does not exist
      org.apache.hadoop.hbase does not exist
      org.apache.hadoop.hbase.io does not exist

      please can anyone help me in this......
      -plz give me some example code to access hbase from hadoop map reduce
      -also guide me how should i compile and execute it

  • Tester

    I think hbase-0.20.6 can't work with hadoop-0.21.0, can it? I tried to start these two components together and got the following error:
    readAndProcess threw exception java.io.IOException: Unable to read authentication method. Count of bytes read: 0

    • http://alansaid.com Alan

      Yes, there's an update about it at the top of the post.
      It seems that HBase has Hadoop 0.20.2 dependencies and will not play nicely with Hadoop 0.21.0, so if you want to use HBase you should set up Hadoop 0.20.2.

  • Kthomp2718282

    I am new to Hadoop and trying to install for a school project to get familiar with it. In following your guide, I got hung up at:

    bin/start-dfs.sh

    I receive the errors:
    namenode running as process 5252. Stop it first.
    ... bin/hadoop: line 258: C:/Program : no such file or directory

    My Java installation is C:/Program Files/Java/jdk-1.6.0_15.

    • http://alansaid.com Alan

      You need to convert the path to your Java installation to Cygwin's format, i.e. CLASSPATH=`cygpath -wp "$CLASSPATH"`

  • jo

    Regarding the XML file changes, core-site.xml etc., I see only an empty <configuration> tag in the default file. Where should the tags be added?

    • http://alansaid.com Alan

      they should be put between the <configuration> and </configuration> tags

  • Farsight Analytics

    The simple "grep" example has an error for hadoop-0.21.0, since the filename of the examples jar has changed. The new command should be:
    bin/hadoop jar hadoop-*-examples-0.21.0.jar grep input output 'dfs[a-z.]+'

    (note the '-0.21.0' added to the filename!)

  • Sidd1986

    when i try to type ssh-host-config on my cygwin terminal...it gives me an error.
    "creating /var/empty directory failed."
    what should i do..

    • Farsight Analytics

      couple of things I would try:
      - Create the directory by hand, set its permissions liberally so that you know it's not a permission problem
      - Check that you don't have your /var directory mapped (symbolically linked) to an odd place.

  • Swami

    When I execute bin/hadoop namenode -format I get an error: 11/05/25 06:33:09 FATAL conf.Configuration: bad conf file: top-level element not <configuration>. I have enclosed the property tags between the configuration tags in both core-site.xml & hdfs-site.xml. Please help

  • Amarjeet Dangi

    Please Help:

    Hi, I am very new to hadoop.

    I have reached successfully till running the commands -
    bin/start-dfs.sh
    bin/start-mapred.sh

    But when I run these commands there is this log in all .out files:

    "/cygdrive/c/cygwin/usr/local/hadoop-0.20.203.0/bin/../bin/hadoop: line 297: C:Program: command not found"

    And after that also I moved forward and then at the command:

    bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

    the command line output is:

    $ bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
    bin/hadoop: line 297: C:Program: command not found
    11/06/15 17:29:34 INFO mapred.FileInputFormat: Total input paths to process : 19
    11/06/15 17:29:34 INFO mapred.JobClient: Running job: job_201106151728_0001
    11/06/15 17:29:35 INFO mapred.JobClient:  map 0% reduce 0%

    And the cursor keeps blinking here for nearly 18 hours, and there are few logs printed in the log files. Please help!

    • Farsight Analytics

      somewhere in your configuration files or environmental settings you have a path beginning with "C:Program Files ..." that isn't being parsed correctly by cygwin.

      In this situation, a program called "cygpath" is your friend. Search on "cygpath" along with hadoop and you should find some pointers on how to deal with this situation.

      I gave up trying to run hadoop on cygwin because of these problems.  It was simpler to create a linux (Ubuntu) partition for me and run hadoop there.

      My advice: Take a look at Amazon EMR.  The time you waste trying to set up your own hadoop installation would be better spent learning how to solve your real map-reduce problem.

  • Amarjeet Dangi

    Hi,

    when I start hadoop with the commands start-all.sh/start-dfs.sh and start-mapred.sh there is the below exception in the tasktracker logs:

    2011-06-22 12:48:34,096 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: /tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
        at org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
        at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
        at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
        at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
        at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
        at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1328)
        at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)

    And I think it is because of the above exception that when I fire the command -

    bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'

    the window shows this output:

    bin/hadoop jar hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z.]+'
    11/06/22 12:50:23 INFO mapred.FileInputFormat: Total input paths to process : 19
    11/06/22 12:50:23 INFO mapred.JobClient: Running job: job_201106221248_0002
    11/06/22 12:50:24 INFO mapred.JobClient:  map 0% reduce 0%

    And it does not move ahead of 0% progress. Please help and guide for a way to resolve this exception.

  • Berlin Brown

    I am getting file permission issues I believe as well. Most of the dfs operations work but not the map reduce. It seems like the system can't write to some directories.

    http://stackoverflow.com/questions/7047945/hadoop-basic-examples-wordcount

    Error:

    2011-08-12 15:45:38,299 WARN org.apache.hadoop.mapred.TaskRunner:
    attempt_201108121544_0001_m_000008_2 : Child Error
    java.io.IOException: Task process exit with nonzero status of 127.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
    2011-08-12 15:45:38,878 WARN org.apache.hadoop.mapred.TaskLog: Failed to
    retrieve stdout log for task: attempt_201108121544_0001_m_000008_1
    java.io.FileNotFoundException:
    E:\projects\workspace_mar11\ParseLogCriticalErrors\lib\h\logs\userlogs\j
    ob_201108121544_0001\attempt_201108121544_0001_m_000008_1\log.index (The
    system cannot find the file specified)

  • Igal

    Hi,
    I have the same issue as @Amarjeet Dangi. Was it resolved eventually?
    Does anyone know how to solve it?

    Thanks!

  • B

    @Igal @Amarjeet
    try Hadoop 0.20.2; the issue has been reported as: https://issues.apache.org/jira/browse/HADOOP-7682

  • Zeke

    Yet another couple of newbie questions...

    In the Hadoop configuration files changes section, is the IP address that is shown standard, or should it be changed?

    Also, I get to the step to run, test the installation by issuing these commands in a Cygwin window "cd /usr/local/hadoop-0.21.0" then "mkdir logsbin/hadoop namenode - format" and get the following error:

    mkdir: cannot create directory 'logsbin/hadoop': No such file or directory.

    I would think it would be a permissions issue but I have full local access to create directories. Any thoughts?

    Thanks!

  • Gomes

    I am getting the following error on windows with cygwin and I'm not sure where to set the classpath to make it work:

    bin/hadoop: line 297: /cygdrive/c/Program: No such file or directory
    java.lang.UnsupportedClassVersionError: Bad version number in .class file
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:621)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:124)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:260)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:56)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:195)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:268)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
    at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
    Exception in thread "main" -bash-4.1$

  • vartika singh

    can i get hadoop to run on windows 7 x86??

  • Fkorning

    Thanks for the great work compiling this info.  It was the only good
    lead for me in setting up 1.0.1. But 1.0.1 is broken in a lot of ways.

    I've managed to patch it and provide you the link here:

    https://issues.apache.org/jira/browse/HADOOP-7682?focusedCommentId=13236645#comment-13236645

    FKorning 

  • Nikzad_n

    hi
    I run
    bin/start-dfs.sh
    bin/start-mapred.sh
    in cygwin successfully, but when I typed http://localhost:9100 in a browser nothing was displayed. Can you help me? My hadoop version is 1.0.1 and my windows is windows7. Thanks a lot

  • Edwinrichy

    i've run the sshd and keygen commands and finished the cygwin installation... but all the command lines given in this blog are not helping me... when i execute any of these commands it says command not found or no such file or directory... how do i resolve it

  • Email

    Here we go again....we can complain about microsoft but at least installing any of their products is next, next, next. I want to spend time analysing data instead of doing difficult installs...

  • souri

    Nicely explained...also try this link

  • Rajnish Kumar

    Hi everyone, I am new to big data and want to learn Hadoop, Zookeeper, Hbase and cygwin, but from where do I download the software? I am unable to locate the software for windows 7. I want to learn it by myself. Please help me out.