
Partition – 70 years After 1947

Independence day article from across the border…

The Human Lens

So it is done. This August 2017, both India and Pakistan celebrated their birthdays as 70-year-old nations, yet on both sides of the border a somber realization has emerged: how far have we really come from the saga of Partition 1947? There is no denying that British India’s division was poorly planned and brutally executed.

Nehru and Jinnah negotiated our liberation from the British colonizers on debatable terms, and I am told on good authority by survivors that the public reaction was one of cautious hope. The people of the subcontinent were hopeful, but also hesitant about the future that was coming.

It is outrageous that the British actively encouraged shoddy measures, unleashing one of the worst calamities of the 20th century. The bloody sectarian ‘cleansing’ that took place is still denied on both sides of the border, and even today Pakistanis and Indians refuse to take collective responsibility for our roles…


Saluting These Brave Acid Attack Survivors

The Human Lens

The Human Lens brings a weekend edition to blow readers’ minds away. As you watch this video, witness how Pakistani women attacked with acid have emerged as strong survivors and, despite societal backlash, have come out publicly as awe-inspiring change agents and role models.

Meet their guru, local entrepreneur Mussarat Misbah of Depilex Smile Again Foundation speaking candidly on the issue.

The video contains sensitive footage of many disfigured victims who have now found their place in society as self-reliant members. These women, and the many more survivors of acid attacks, are the true heroes of Pakistan; join me in saluting them.


All credits to: Videographer / Director: Fayyaz Adrees, Producer: Ruchika Hurria / Neha Routela / Aamir Bashir  and Editor: Sonia Estal


DON’T SHUT INDIA DOWN – Stop the government from snatching your right to use the Internet

A few months back, we had a mobile Internet shutdown in Gujarat due to the Patidar Anamat Andolan. That was not the government’s first attempt to harass innocent citizens; we have faced such issues a few more times in the past. I remember this incident because I posted an article about it on my blog, and a few of my friends had a brainstorming session in the comments. We were thinking about preparing an “Avedan Patra” (petition) for an “Internet Mukti Andolan”, but as usual we all got busy with our daily work and no one took the idea seriously. Today, in an episode of “On Air with AIB” dated 20/03/2017, I found that someone has already prepared a petition to stop the government from shutting down the Internet for such silly reasons. The Internet Freedom Foundation‘s “keepusonline.in“ has taken the initiative to help all the citizens of India. So I am requesting all of you to sign the petition and let the government know how disturbing it is when they shut down our Internet.

Image Source: https://internetfreedom.in/files/keepusonline.in/images/keepusonline_logo_3x_compressed.png

Hadoop 1.2.1 Installation and Configuration on Multiple Nodes

There are a few changes one has to make to go from a single-node to a multi-node setup. First, complete the single-node setup up to the DFS formatting step.

There are mainly five steps to move from a single-node to a multi-node setup:

STEPS:

  1. SSH COPY ID to all nodes
  2. Configure masters and slaves
  3. Configure CORE-SITE.XML and MAPRED-SITE.XML
  4. Format DFS
  5. START-ALL.SH

Now I am going to explain these steps in detail:

Step-1 SSH COPY ID to all nodes:

From the NAME NODE, we need to generate an SSH key and distribute it to all the SLAVE NODES and also to the SECONDARY NAME NODE (if any).

Command:

ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@coed159

Here “hadoop” is a user name and “coed159” is a system name; change them according to your setup.

COPY FINGERPRINT: when prompted about the host fingerprint, answer yes.

Do the same for all the DATA NODES and for the SECONDARY NAME NODE (if any).

Check whether the key was copied successfully:

ssh coed159

It should not ask for a password.
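
If you have several nodes, a small loop saves some typing. This is only a sketch that reuses the host names of my cluster (coed159, coed160, coed162, coed163) and the “hadoop” user from above; adjust both for your setup:

for node in coed159 coed160 coed162 coed163; do
  ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoop@$node
done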

Step-2 Configure masters and slaves:

We need to do this on the NAME NODE alone (not on the DATA NODES or the SECONDARY NAME NODE).

Go to the NAME NODE.

Command:

cd /usr/local/hadoop/conf

Find the two files: masters and slaves.

masters is for the NAME NODE and the SECONDARY NAME NODE.

slaves is for the DATA NODES.

Command:

sudo nano /usr/local/hadoop/conf/masters

By default it contains ‘localhost’. Change it to the name of the NAME NODE (coed161 in my case).

Ctrl + o to save

Enter

Ctrl + x to exit

sudo nano /usr/local/hadoop/conf/slaves

By default it contains ‘localhost’. Change it to contain the names of all the DATA NODES, one per line; in my case:

coed159

coed160

coed162

coed163

Ctrl + o to save

Enter

Ctrl + x to exit

Step-3 Configure CORE-SITE.XML and MAPRED-SITE.XML

Go to the SLAVES and the SECONDARY NAME NODE; we need to make them point to the master.

Command:

sudo nano /usr/local/hadoop/conf/core-site.xml

Check whether ‘fs.default.name’ is pointing to the NAME NODE (coed161 in my case). If it is pointing to localhost:10001, replace localhost with coed161.

Ctrl + o to save

Enter

Ctrl + x to exit
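
After the edit, the fs.default.name property in core-site.xml should look roughly like this (a sketch assuming port 10001 from the single-node setup; substitute your own NAME NODE host name for coed161):

<property>
  <name>fs.default.name</name>
  <value>hdfs://coed161:10001</value>
</property>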

Do the same for MAPRED-SITE.XML.

Command:

sudo nano /usr/local/hadoop/conf/mapred-site.xml

Check whether ‘mapred.job.tracker’ is pointing to the JOB TRACKER / NAME NODE (coed161 in my case).

If it is ‘localhost:10002’, update it to ‘coed161:10002’.
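
The resulting mapred.job.tracker property should look roughly like this (again, substitute your own host name for coed161):

<property>
  <name>mapred.job.tracker</name>
  <value>coed161:10002</value>
</property>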

Remove the LOCALHOST entries from the /ETC/HOSTS file

Command:

sudo nano /etc/hosts

Remove the localhost line and the entries for 127.0.0.1.
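
For reference, here is a sketch of what /etc/hosts could look like on every node after the edit. The IP addresses below are hypothetical placeholders; map each host name used above to the real address of that machine:

192.168.1.159 coed159
192.168.1.160 coed160
192.168.1.161 coed161
192.168.1.162 coed162
192.168.1.163 coed163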

Step-4 Format DFS:

If you are converting an existing single-node installation, you must delete /USR/LOCAL/HADOOP/TMP and create it again on all the nodes, and then format it from the NAME NODE alone. If you have not yet formatted your HDFS during the single-node setup, skip straight to the formatting step.

Command:

To remove the directory:

sudo rm -r /usr/local/hadoop/tmp

Create the tmp directory:

sudo mkdir /usr/local/hadoop/tmp

Change the ownership of the tmp directory as well as the hadoop directory:

sudo chown hadoop /usr/local/hadoop/tmp

sudo chown hadoop /usr/local/hadoop
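
Since the tmp directory has to be recreated on every node, a loop over the remaining nodes can do it in one go. This is only a sketch reusing the passwordless SSH from Step 1 and my host names and user; the -t flag lets sudo prompt for a password over SSH if it needs one:

for node in coed159 coed160 coed162 coed163; do
  ssh -t hadoop@$node "sudo rm -rf /usr/local/hadoop/tmp && sudo mkdir /usr/local/hadoop/tmp && sudo chown hadoop /usr/local/hadoop/tmp"
done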

Format NAME NODE

hadoop namenode -format

Check for the ‘name node successfully formatted’ message

Step-5 START-ALL.SH

To start the Hadoop cluster in multi-node mode, we have to run this command from the NAME NODE; it starts the respective services on all NODES.

Command:

start-all.sh

jps

Check each system separately to find the specific JVMs running on it
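
With this configuration, jps should show roughly the following JVMs (besides Jps itself; process IDs will differ):

On coed161 (NAME NODE): NameNode, SecondaryNameNode, JobTracker

On coed159, coed160, coed162 and coed163 (DATA NODES): DataNode, TaskTracker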

Check the number of live nodes in the web GUI (it will take a few minutes)
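
Assuming the default Hadoop 1.x web ports (they can be changed in the configuration files), the GUIs are at:

http://coed161:50070 (NameNode / HDFS status, including the number of live nodes)

http://coed161:50030 (JobTracker / Map Reduce status)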

stop-all.sh

For any queries you can write in a comment or mail me at: “brijeshbmehta@gmail.com”

Courtesy: Mr. Anand Kumar, NIT, Trichy

What an idea, sirji! A great idea to fulfill the dream of publishing a book. A small contribution from someone who believes in such thinking. All the best!

Jitesh Donga. A live wire right from the start. He has been in touch with me for years as a fellow reader. Even when I cannot reply every time, he never takes it badly… he takes it sweetly. Despite being an engineer, he keeps sending me fresh business and literary ideas, and chats whenever there is time. I am not one for reverence, but his affection for me is complete. If the reading is good, the urge to write is bound to well up […]

via North Pole: the story of youth, in a young man’s own words… — planetJV

That’s a perfect example of “Do your duties, and the rest are stories!” ;)

“I gave my best; now if they don’t like it, what can I do?” Sounds so simple, doesn’t it? But when you have a job in Ahmedabad, have wrangled leave with great difficulty to come to Rajkot, and then have to catch a late-night bus back to work, those words are very hard for me to say. This is the story of […]

via “I gave my best; now if they don’t like it, what can I do?” — Dreams by Ab.Mehta

I agree, but I personally believe it is up to the attendee whether he wants someone to disturb him or not. Don’t let people expect such instant connectivity from you. Isn’t it?

Not every person is obliged to drop all their work and connect the moment we wish it. The mobile rings, a message arrives, or a notification tone chimes, and the mind drops everything and gets stuck on the phone. Whether you pick the phone up or not, one corner of the mind stays attached to it, and the person on the other end keeps that corner active by ringing again and again or sending message after message! There is a tremendous pressure to connect, and to stay connected, compulsorily!

via Not every person is obliged to drop all their work and connect the moment we wish it; there is a tremendous pressure to be connected and to stay connected! — Dr.Hansal Bhachech’s Blog

Is a cashless economy possible (feasible)?

Last week my friend and I went to a restaurant (Sugar n’ Spice) in Surat and were surprised to see a board saying “We do not accept credit / debit cards”!

We discussed the issue with some of our friends and were shocked to hear that there are many such places in Surat (and maybe in India as well) where merchants follow this practice. We even asked for any other mode of payment, but they said they only accept cash. Banks (or maybe the RBI or the Government of India) are not providing enough cash; they are trying to promote a cashless economy, but such people are not helping because of limitations on their side. I have also heard from some merchants that they have already applied for POS machines but the banks are not providing them (some of them applied even before 8th November!). I don’t understand who is responsible for all these problems of the common man, the law-abiding citizen. I believe this government is trying to build a house of cards in a very windy atmosphere, and one day it will fall to the ground along with the trust of all the citizens of India. Let us pray for the innocent citizens who die every day because of our dirty politics.

Hadoop 1.2.1 Installation and Configuration on Single Node

I have experienced difficulties in installing and configuring Hadoop, so I want to make an easy guide for the installation and configuration of Hadoop 1.x. I am assuming that readers have a knowledge of basic Linux commands, so I am not going to explain those commands in depth.

I have used Hadoop-1.2.1, JDK 7 and Ubuntu (Linux) in this setup.

Install SSH:

  • We require SSH for remote login to the different machines so that Map Reduce tasks can run on the Hadoop cluster

Commands:

  • sudo apt-get update
    • updates list of packages
  • sudo apt-get install openssh-server
    • Installs OpenSSH Server

Generate Keys:

  • Hadoop logs in to remote machines many times while running a Map Reduce task. Therefore, we need to set up passwordless login for Hadoop to all the nodes in our cluster.

Commands:

  • ssh <hostname>
    • write your system’s host name in place of <hostname>. It asks for a password
  • ssh-keygen
    • generates SSH Keys
  • ENTER FILE NAME:
    • no need to write anything; simply press Enter, as we want the default settings
  • ENTER PASSPHRASE:
    • no need to write anything; simply press Enter, as we want the default settings
  • cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    • Copies the id_rsa.pub key into authorized_keys to make passwordless login possible for the user
  • ssh <hostname>
    • Now it should not ask for a password

Install Java

  • I prefer offline Java installation, so I have already downloaded the Java tar ball and placed it in my Downloads directory

Commands:

  • sudo mkdir -p /usr/lib/jvm/
    • create directory for Java
  • sudo tar xvf ~/Downloads/jdk-7u67-linux-x64.tar.gz -C /usr/lib/jvm
    • extract and copy content of Java tar ball to Java directory
  • cd /usr/lib/jvm
    • go to Java directory
  • sudo ln -s jdk1.7.0_67 java-1.7.0-sun-amd64
    • generate symbolic link to jdk directory which will be used in Hadoop configuration
  • sudo update-alternatives --config java
    • checking and setting Java alternatives
  • sudo nano $HOME/.bashrc
    • setting the Java path. Add the following two lines at the end of this file
      • export JAVA_HOME="/usr/lib/jvm/jdk1.7.0_67"
      • export PATH="$PATH:$JAVA_HOME/bin"
  • exec bash
    • restarts bash(terminal)
  • java
    • it should not show a ‘command not found’ error!
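
As an extra sanity check (a sketch; the exact version string on your machine will differ), you can also run:

java -version

echo $JAVA_HOME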

Install Hadoop:

  • First we need to download the required tar ball of Hadoop and place it in the home directory.

Commands:

  • sudo mkdir -p /usr/local/hadoop/
    • create Hadoop directory
  • sudo tar xvf ~/hadoop-1.2.1-bin.tar.gz -C /usr/local/hadoop
    • extract and copy Hadoop files from tar ball to Hadoop directory
  • sudo nano $HOME/.bashrc
    • setting the Hadoop path. Add the following lines at the end of this file
      • export HADOOP_PREFIX=/usr/local/hadoop
      • export PATH=$PATH:$HADOOP_PREFIX/bin
  • exec bash
    • restarts bash(terminal)
  • hadoop
    • it should not show a ‘command not found’ error!
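
Similarly, a quick check that the Hadoop scripts are on the path (the output should mention version 1.2.1):

hadoop version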

Configuration of Hadoop

  • We are setting some environment variables and changing some configuration files according to our cluster setup.

Commands:

  • cd /usr/local/hadoop/conf
    • go to configuration directory of Hadoop
  • sudo nano hadoop-env.sh
    • open the environment variables file and add the following two lines in their respective places. They are already present in the file with different values; keep those as they are and add these lines after them
      • export JAVA_HOME=/usr/lib/jvm/java-1.7.0-sun-amd64
      • export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
  • sudo nano core-site.xml
    • open the HDFS configuration file and set the name server address and the tmp dir value. You have to use your own hostname instead of “coed161”. Add these properties inside the <configuration> element:
      <property>
        <name>fs.default.name</name>
        <value>hdfs://coed161:10001</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
      </property>
  • sudo nano mapred-site.xml
    • open the Map Reduce configuration file and set the job tracker value. You have to write your own host name instead of “coed161”. Add this property inside the <configuration> element:
      <property>
        <name>mapred.job.tracker</name>
        <value>coed161:10002</value>
      </property>
  • sudo mkdir /usr/local/hadoop/tmp
    • create tmp directory to store all files on data node
  • sudo chown <username> /usr/local/hadoop/tmp
    • change the owner of the directory to avoid access control issues. Write your username (hadoop, in this setup) instead of <username>
  • sudo chown <username> /usr/local/hadoop
    • change the owner of the directory to avoid access control issues. Write your username instead of <username>

Format DFS (skip this step if you are going for Multi-Node setup):

  • Now we are ready to format our Distributed File System (DFS)

Command:

  • hadoop namenode -format
    • Check for the message “namenode successfully formatted”

Start all process:

  • We are ready to start our Hadoop cluster (though it has only a single node)

Commands:

  • start-all.sh
    • to start all (name node, secondary name node, data node, job tracker, task tracker)
  • jps
    • to check whether all services (i.e. name node, secondary name node, data node, job tracker, task tracker) started or not
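
On a single-node setup all five daemons run on the same machine, so jps should list something like this (process IDs will vary):

NameNode
SecondaryNameNode
DataNode
JobTracker
TaskTracker
Jps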

To check cluster details on web interface:
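
Assuming the default Hadoop 1.x ports (they can be overridden in the configuration files), open these URLs in a browser:

http://localhost:50070 (NameNode / HDFS status)

http://localhost:50030 (JobTracker / Map Reduce status)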

Stop all processes:

  • If you want to stop (shut down) all your Hadoop cluster services

Command:

  • stop-all.sh

For any queries you can write in a comment or mail me at: “brijeshbmehta@gmail.com”

Courtesy: Mr. Anand Kumar, NIT, Trichy

Scrapping the 1000-rupee note and introducing the 2000-rupee note will increase black money and corruption

You may not agree with the whole speech, but you have to agree with the tagline, don’t you?
