Tuesday, 24 December 2013

Hacking on MEAN stack

The objective of this post is to get started with the MEAN stack, where

M => mongodb
E  => expressjs
A  => angularjs
N  => nodejs

STEP 1 Install M(ongodb)
MongoDB is the document database that serves as the storage layer of the MEAN stack.
To install it, please follow the installation steps at Hacking on grails and mongodb.

STEP 2 Install N(odejs)
Node.js is a platform built on Chrome's JS runtime for easily building fast, scalable network apps.
To install it, please follow Hacking on node.js and geddy

STEP 3 Install E(xpressjs)
Express is a minimal and flexible node.js web application framework.
Assuming npm is installed, execute the following command to install express globally.
$ npm install -g express

Note: in later versions the generator is a separate package, so you may need npm install -g express-generator to get the express command working.

3.1 create express web app

$ express onlywallet

   create : onlywallet
   create : onlywallet/package.json
   create : onlywallet/app.js
   create : onlywallet/public
   create : onlywallet/public/images
   create : onlywallet/routes
   create : onlywallet/routes/index.js
   create : onlywallet/routes/user.js
   create : onlywallet/public/stylesheets
   create : onlywallet/public/stylesheets/style.css
   create : onlywallet/views
   create : onlywallet/views/layout.jade
   create : onlywallet/views/index.jade
   create : onlywallet/public/javascripts

   install dependencies:
     $ cd onlywallet && npm install

   run the app:
     $ node app

3.2 add mongodb driver and mongoosejs

Add both to the dependencies section of package.json, then re-run npm install:
"dependencies": {
  "express": "3.0.3",
  "jade": "*",
  "mongodb": ">= 0.9.6-7",
  "mongoose" : ">= 3.6"
}

3.3 configure app.js to connect to mongodb
var Mongoose = require('mongoose');
var db = Mongoose.createConnection('localhost', 'onlywallet');
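With the connection in place, a minimal Mongoose sketch of defining a model and saving a document might look like this (the Wallet schema and its fields are made up for illustration):

var Mongoose = require('mongoose');
var db = Mongoose.createConnection('localhost', 'onlywallet');

// hypothetical schema, just to show the API
var walletSchema = new Mongoose.Schema({
  owner   : String,
  balance : Number
});
var Wallet = db.model('Wallet', walletSchema);

// save one document into the onlywallet db
new Wallet({ owner : 'prayag', balance : 100 }).save(function (err) {
  if (err) console.error(err);
});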


STEP 4 Install A(ngularjs) using Bower
AngularJS is for writing client-side web apps as if you had a smarter browser.

$ npm install bower -g
$ bower install angular#1.0.6
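Bower drops angular under its components directory; after copying angular.js into public/javascripts, a minimal module/controller sketch to verify the setup (module and controller names are made up):

// public/javascripts/app.js - hypothetical example
var app = angular.module('onlywalletApp', []);

app.controller('WalletCtrl', function ($scope) {
  $scope.greeting = 'Hello from AngularJS';
});

Bootstrap it by adding ng-app="onlywalletApp" to the html tag in views/layout.jade.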

Just noticed http://mean.io, where MEAN comes as a single npm bundle.

References
http://expressjs.com/guide.html

http://dandean.com/nodejs-npm-express-osx/

http://thecodebarbarian.wordpress.com/2013/07/22/introduction-to-the-mean-stack-part-one-setting-up-your-tools/

Sunday, 24 November 2013

Data flow from source to sink using Cascading 2.2.0 and Hadoop 2.2.0


Cascading is an abstraction layer over Hadoop MapReduce, which means I can write complex data-processing flows with Cascading instead of raw MapReduce jobs.

cas·cade /kasˈkād/ v. (of water) pour downward rapidly and in large quantities


I am creating a simple example that streams data from the local filesystem (called the Source) to a destination (called the Sink) through a Cascading channel (called a Pipe).

[STEP 1] download and tar hadoop 2.2.0
$ wget http://www.eng.lsu.edu/mirrors/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz \
  && mv hadoop-2.2.0.tar.gz /usr/local \
  && cd /usr/local && tar -zxvf hadoop-2.2.0.tar.gz

[STEP 2] configure HADOOP_HOME
HADOOP_HOME=/usr/local/hadoop-2.2.0; export HADOOP_HOME
PATH=$HADOOP_HOME/bin:$PATH; export PATH

[STEP 3] set up gradle, then create a gradle project (part1) with the following build.gradle

It includes the Cascading dependencies.
repositories {
  mavenLocal()
  mavenCentral()
  mavenRepo name: 'conjars', url: 'http://conjars.org/repo/'
}


ext.cascadingVersion = '2.2.0'
ext.hadoopVersion = '1.1.2'

dependencies {
  compile( group: 'cascading', name: 'cascading-core', version: cascadingVersion )
  compile( group: 'cascading', name: 'cascading-local', version: cascadingVersion )
  compile( group: 'cascading', name: 'cascading-hadoop', version: cascadingVersion )
  providedCompile( group: 'org.apache.hadoop', name: 'hadoop-core', version: hadoopVersion )
}


jar {
  description = "Assembles a Hadoop ready jar file"
  doFirst {
    into( 'lib' ) {
      from configurations.compile
    }
  }

  manifest {
    attributes( "Main-Class": "logstream/DistributedFileCopy" )
  }

}
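With this build file, the Hadoop-ready jar (bundling the compile dependencies under lib/) should be assembled with:

$ gradle clean jar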



[STEP 4] create a flow to stream data from local file system source to sink.
package logstream;
import java.util.Properties;
import cascading.flow.Flow;
import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.pipe.Pipe;
import cascading.property.AppProps;
import cascading.scheme.hadoop.TextDelimited;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tuple.Fields;

public class DistributedFileCopy {

  public static void main( String[] args ){
    String inPath         = args[ 0 ]; //data/application.log
    String outPath        = args[ 1 ];
    Properties properties = new Properties();
    AppProps.setApplicationJarClass( properties, DistributedFileCopy.class );

    HadoopFlowConnector flowConnector = new HadoopFlowConnector( properties );

    // 1. create the source tap (spout in Storm)
    Tap inTap  = new Hfs( new TextDelimited( true, "\t" ), inPath );
    // 2. create the sink tap
    Tap outTap = new Hfs( new TextDelimited( true, "\t" ), outPath );

    // 3. specify a pipe to connect the taps
    Pipe channel = new Pipe( "copy" );

    //4. connect the taps, pipes, etc., into a flow (topology in Storm)
    FlowDef flowDef = FlowDef.flowDef()
                             .addSource( channel, inTap )
                             .addTailSink( channel, outTap );
    // 5. run the flow
    flowConnector.connect( flowDef ).complete();
  }
}
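Since build.gradle also pulls in cascading-local, the same copy flow can run in pure local mode, without Hadoop at all. A sketch under that assumption (the class name and local taps are mine, untested here):

package logstream;

import java.util.Properties;
import cascading.flow.FlowDef;
import cascading.flow.local.LocalFlowConnector;
import cascading.pipe.Pipe;
import cascading.scheme.local.TextDelimited;
import cascading.tap.Tap;
import cascading.tap.local.FileTap;

public class LocalFileCopy {

  public static void main( String[] args ) {
    // same source -> pipe -> sink wiring, but against the local filesystem
    Tap inTap  = new FileTap( new TextDelimited( true, "\t" ), args[ 0 ] );
    Tap outTap = new FileTap( new TextDelimited( true, "\t" ), args[ 1 ] );
    Pipe channel = new Pipe( "copy" );

    FlowDef flowDef = FlowDef.flowDef()
                             .addSource( channel, inTap )
                             .addTailSink( channel, outTap );

    new LocalFlowConnector( new Properties() ).connect( flowDef ).complete();
  }
}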



Create an input file at data/application.log (tab-delimited, with a header row):

doc_id        text
doc01        A rain shadow is a dry area on the lee back side of a mountainous area.
doc02        This sinking, dry air produces a rain shadow, or area in the lee of a mountain with less rain and cloudcover.
doc03        A rain shadow is an area of dry land that lies on the leeward (or downwind) side of a mountain.
doc04        This is known as the rain shadow effect and is the primary cause of leeward deserts of mountain ranges, such as California's Death Valley.
doc05        Two Women. Secrets. A Broken Land. [DVD Australia]

[STEP 5] run app supplying data/application.log as input file
prayag@prayag:/backup/workspace.programming/Impatient/part1$ hadoop jar build/libs/logstream.jar data/application.log output/application
13/10/27 01:20:36 INFO util.HadoopUtil: resolving application jar from found main method on: logstream.DistributedFileCopy
13/10/27 01:20:36 INFO planner.HadoopPlanner: using application jar: /backup/workspace.programming/Impatient/part1/build/libs/logstream.jar
13/10/27 01:20:36 INFO property.AppProps: using app.id: 80EA1575557B4AD6B0677A5C78D7AB73
13/10/27 01:20:37 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
13/10/27 01:20:37 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
13/10/27 01:20:37 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/27 01:20:37 INFO Configuration.deprecation: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
13/10/27 01:20:37 INFO Configuration.deprecation: mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
13/10/27 01:20:38 INFO util.Version: Concurrent, Inc - Cascading 2.2.0
13/10/27 01:20:38 INFO flow.Flow: [] starting
13/10/27 01:20:38 INFO flow.Flow: []  source: Hfs["TextDelimited[['doc_id', 'text']]"]["data/application.log"]
13/10/27 01:20:38 INFO flow.Flow: []  sink: Hfs["TextDelimited[['doc_id', 'text']]"]["output/application"]
13/10/27 01:20:38 INFO flow.Flow: []  parallel execution is enabled: false
13/10/27 01:20:38 INFO flow.Flow: []  starting jobs: 1
13/10/27 01:20:38 INFO flow.Flow: []  allocating threads: 1
13/10/27 01:20:38 INFO flow.FlowStep: [] starting step: (1/1) output/application
13/10/27 01:20:38 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
13/10/27 01:20:38 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/10/27 01:20:38 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
13/10/27 01:20:38 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/27 01:20:38 INFO mapreduce.JobSubmitter: number of splits:1
13/10/27 01:20:38 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.output.key.comparator.class is deprecated. Instead, use mapreduce.job.output.key.comparator.class
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
13/10/27 01:20:38 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
13/10/27 01:20:38 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local966329123_0001
13/10/27 01:20:38 WARN conf.Configuration: file:/tmp/hadoop-prayag/mapred/staging/prayag966329123/.staging/job_local966329123_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/10/27 01:20:38 WARN conf.Configuration: file:/tmp/hadoop-prayag/mapred/staging/prayag966329123/.staging/job_local966329123_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
13/10/27 01:20:38 WARN conf.Configuration: file:/tmp/hadoop-prayag/mapred/local/localRunner/prayag/job_local966329123_0001/job_local966329123_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
13/10/27 01:20:38 WARN conf.Configuration: file:/tmp/hadoop-prayag/mapred/local/localRunner/prayag/job_local966329123_0001/job_local966329123_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.

13/10/27 01:20:38 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
13/10/27 01:20:38 INFO mapred.LocalJobRunner: OutputCommitter set in config null
13/10/27 01:20:38 INFO flow.FlowStep: [] submitted hadoop job: job_local966329123_0001
13/10/27 01:20:38 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
13/10/27 01:20:39 INFO mapred.LocalJobRunner: Waiting for map tasks
13/10/27 01:20:39 INFO mapred.LocalJobRunner: Starting task: attempt_local966329123_0001_m_000000_0
13/10/27 01:20:39 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
13/10/27 01:20:39 INFO io.MultiInputSplit: current split input path: file:/backup/workspace.programming/Impatient/part1/data/application.log
13/10/27 01:20:39 INFO mapred.MapTask: Processing split: cascading.tap.hadoop.io.MultiInputSplit@e8848c
13/10/27 01:20:39 INFO mapred.MapTask: numReduceTasks: 0
13/10/27 01:20:39 INFO hadoop.FlowMapper: cascading version: 2.2.0
13/10/27 01:20:39 INFO hadoop.FlowMapper: child jvm opts: -Xmx200m
13/10/27 01:20:39 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
13/10/27 01:20:39 INFO hadoop.FlowMapper: sourcing from: Hfs["TextDelimited[['doc_id', 'text']]"]["data/application.log"]
13/10/27 01:20:39 INFO hadoop.FlowMapper: sinking to: Hfs["TextDelimited[['doc_id', 'text']]"]["output/application"]
13/10/27 01:20:39 INFO mapred.LocalJobRunner:
13/10/27 01:20:39 INFO mapred.Task: Task:attempt_local966329123_0001_m_000000_0 is done. And is in the process of committing
13/10/27 01:20:39 INFO mapred.LocalJobRunner:
13/10/27 01:20:39 INFO mapred.Task: Task attempt_local966329123_0001_m_000000_0 is allowed to commit now
13/10/27 01:20:39 INFO output.FileOutputCommitter: Saved output of task 'attempt_local966329123_0001_m_000000_0' to file:/backup/workspace.programming/Impatient/part1/output/rain/_temporary/0/task_local966329123_0001_m_000000
13/10/27 01:20:39 INFO mapred.LocalJobRunner: file:/backup/workspace.programming/Impatient/part1/data/application.log:0+510
13/10/27 01:20:39 INFO mapred.Task: Task 'attempt_local966329123_0001_m_000000_0' done.
13/10/27 01:20:39 INFO mapred.LocalJobRunner: Finishing task: attempt_local966329123_0001_m_000000_0
13/10/27 01:20:39 INFO mapred.LocalJobRunner: Map task executor complete.
13/10/27 01:20:44 INFO util.Hadoop18TapUtil: deleting temp path output/application/_temporary



[STEP 6] verify output data
prayag@prayag:/backup/workspace.programming/Impatient/part1$ more output/application/part-00000

doc_id text
doc01 A rain shadow is a dry area on the lee back side of a mountainous area.
doc02 This sinking, dry air produces a rain shadow, or area in the lee of a mountain with less rain and cloudcover.
doc03 A rain shadow is an area of dry land that lies on the leeward (or downwind) side of a mountain.
doc04 This is known as the rain shadow effect and is the primary cause of leeward deserts of mountain ranges, such as California's Death Valley.
doc05 Two Women. Secrets. A Broken Land. [DVD Australia]



References
http://docs.cascading.org/impatient/impatient1.html
https://github.com/Cascading/Impatient/tree/master/part1

Intro to Cascading flows

Saturday, 23 November 2013

Hacking on scala play framework 2.0.6


STEP 1 Download and install play framework
$ mv play-2.0.6.zip /opt
$ cd /opt
$ sudo unzip play-2.0.6.zip
$ vi /etc/profile
PLAY_PATH=/opt/play-2.0.6; export PLAY_PATH
PATH=$PLAY_PATH:$PATH; export PATH

$ sudo chmod 777 -R play-2.0.6

STEP 2 create an app
$ play new shaharma


STEP 3 project props

$ cat project/plugins.sbt
// Comment to get more information during initialization
logLevel := Level.Warn

// The Typesafe repository
resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"

// Use the Play sbt plugin for Play projects
addSbtPlugin("play" % "sbt-plugin" % "2.0.6")


$ cat project/Build.scala
import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

    val appName         = "shaharma"
    val appVersion      = "1.0-SNAPSHOT"

    val appDependencies = Seq(
      // Add your project dependencies here,
    )

    val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
      // Add your own project settings here
    )

}


$ cat project/build.properties
sbt.version=0.11.3

STEP 4 run the app at port 9000
$ cd shaharma
$ play run
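The app answers at http://localhost:9000 with a default index page, backed by a controller that looks roughly like this in the Play 2.0 scaffold:

$ cat app/controllers/Application.scala
package controllers

import play.api._
import play.api.mvc._

object Application extends Controller {

  def index = Action {
    Ok(views.html.index("Your new application is ready."))
  }

}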


Sunday, 17 November 2013

count lines of code (loc) using terminal


The grep pattern '[;{]' keeps only lines containing a statement terminator or a block opener, a rough proxy for logical lines of code:

prayag@prayag:/programming//Healthcare/Webservice/src/java/com/eccount/trending/elasticsearch/rest/action/listeners/transaction$ cat *.java | grep '[;{]' | wc -l
724
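The same heuristic can be applied recursively over a whole source tree; a sketch (grep -c counts the matching lines directly):

$ find . -name '*.java' -exec cat {} + | grep -c '[;{]'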




Sunday, 3 November 2013

Devkota’s Twentieth Century Prometheus


In Greek mythology, Prometheus is a Titan, culture hero, and trickster figure who is credited with the creation of man from clay, and who defies the gods and gives fire to humanity (theft of fire), an act that enabled progress and civilization. He is known for his intelligence and as a champion of mankind. (wikipedia, 2013)



In the twentieth century, while working with Greek mythology, Laxmi Prasad Devkota used translation as a liberating force on the one hand and a captivating force on the other. In his effort to liberate and captivate these characters, he created a different type of myth. Devkota was endowed with unbridled poetic gifts and the capacity to create and recreate myths. By redescribing Greek myths from the perspective of his own socio-political and cultural environs, Devkota recreated a different mythic reality. In the process of transcreation, this myth-maker has himself become a myth for successive generations. (Translation and Mythology: A Case of Devkota's Promithas, Bal Ram Adhikari, 2013)

An excerpt from Devkota's masterpiece, where Prometheus stirs the human beings to chant:
"बोल हो मानव, झरोस, झरोस, गिरोस, जिउस" (Translation: Say, oh humans, may Zeus fall, may he fall.)

References
http://en.wikipedia.org/wiki/Prometheus#Prometheus_in_the_Twentieth_Century
http://en.wikipedia.org/wiki/Laxmi_Prasad_Devkota
http://www.academia.edu/4657380/Translation_and_Mythology_A_case_of_Devkotas_Promithas

Wednesday, 23 October 2013

Tips for IntelliJ IDEA


Color Themes

Some of my favourite themes are:

1) Solarized
$ git clone https://github.com/jkaving/intellij-colors-solarized.git
Then go to File | Import Settings... and point it at the intellij-colors-solarized directory.
Finally, go to Preferences | Editor | Colors & Fonts and select one of the new color themes.




2) Spacegray



3) classic eclipse -
http://color-themes.com/?view=theme&id=563a1a6b80b4acf11273ae67


http://color-themes.com/?view=theme&id=5b491e3c50544f1700232dbc



4) Monokai color scheme

5) For Twilight,
wget --directory-prefix=~/.IntelliJIdea13/config/colors/ https://raw.githubusercontent.com/eed3si9n/color-themes/master/IntelliJ-IDEA/Twilight/Twilight.xml


[1] Setup auto imports


[2] Editor 
[2.1] Auto scroll from source


OR add ProjectPane=true to ~/.idea/workspace.xml



[2.2] Show line numbers



[2.3] Disable autosave
Go to Settings, search for "Synchro", and open General under IDE Settings.




Then, enable marking modified files with an asterisk (*).





Optimize imports on the fly







[3] Keymap
[3.1] Delete a line: C-k kills from the cursor to the end of the line (Emacs keymap); C-y deletes the line in the default keymap.

C-a C-k deletes a whole line (Emacs keymap).


[3.2] @Override methods - C-o




[3.3] add a breakpoint - C-f8

[3.4] C-Shift-F12 to hide all tool windows, http://stackoverflow.com/a/10990239/432903



[3.5] change keymap to emacs



[3.6] column width (<= 100)




[4] change file header




Windows / Buffers (Emacs keymap)

Switch between multiple IntelliJ windows/projects: C-x o
Switch the buffer: C-x p/n
Kill the buffer: C-x k
Vertical (column) selection: Alt + drag the cursor up or down

[5] Run Main class with app options




References

https://docs.google.com/document/d/1kdUrsjCOIBunzk5OYBojL-58LNguFTLl_KZqNIva3tg/edit#heading=h.bnc6aa9l7gnu

What's Cool In IntelliJ IDEA, Part I, http://arhipov.blogspot.com/2011/06/whats-cool-in-intellijidea-part-i.html
http://refcardz.dzone.com/refcardz/intellij-idea
http://sethmason.com/2007/06/29/ten-keyboard-shortcuts-for-intellij-idea.html
http://stackoverflow.com/a/14495004/432903

http://stackoverflow.com/a/2648361/432903

Monday, 21 October 2013

First night hack on elasticsearch (3 hrs)

ElasticSearch is a Lucene-based distributed RESTful (PUT, GET, POST, DELETE) search server developed in Java, in the same space as Apache Solr.
To get motivated, read a SoundCloud case study and watch Search and Discovery at SoundCloud.

Elasticsearch in 15 minutes from David Pilato is a nice resource to follow.





The following slide deck is also a useful one (with a Rails tutorial at slide #44).



Now, let's get our hands dirty.

[STEP 1] download and tar elasticsearch 0.90.5
prayag@prayag:~$ wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.90.5.tar.gz

prayag@prayag:~$ tar -zxvf elasticsearch-0.90.5.tar.gz

[STEP 2] start elasticsearch node (in foreground)
prayag@prayag:~$ elasticsearch-0.90.5/bin/elasticsearch -f
[2013-10-21 23:19:19,127][INFO ][node                     ] [Aardwolf] version[0.90.5], pid[15897], build[c8714e8/2013-09-17T12:50:20Z]
[2013-10-21 23:19:19,128][INFO ][node                     ] [Aardwolf] initializing ...
[2013-10-21 23:19:19,140][INFO ][plugins                  ] [Aardwolf] loaded [], sites []
[2013-10-21 23:19:23,523][INFO ][node                     ] [Aardwolf] initialized
[2013-10-21 23:19:23,524][INFO ][node                     ] [Aardwolf] starting ...
[2013-10-21 23:19:23,800][INFO ][transport                ] [Aardwolf] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.195.8:9300]}
[2013-10-21 23:19:27,045][INFO ][cluster.service          ] [Aardwolf] new_master [Aardwolf][pK-h7KRwTWamnQNNrR2gwQ][inet[/192.168.195.8:9300]], reason: zen-disco-join (elected_as_master)
[2013-10-21 23:19:27,143][INFO ][discovery                ] [Aardwolf] elasticsearch/pK-h7KRwTWamnQNNrR2gwQ
[2013-10-21 23:19:27,233][INFO ][http                     ] [Aardwolf] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.195.8:9200]}
[2013-10-21 23:19:27,234][INFO ][node                     ] [Aardwolf] started


[2013-10-21 23:19:27,311][INFO ][gateway                  ] [Aardwolf] recovered [0] indices into cluster_state

Verify elasticsearch is running at port 9200.
First, you may need to install curl:
$ sudo apt-get update
$ sudo apt-get install curl

Then curl the following request in a terminal:
$ curl -XGET 'http://localhost:9200'
{
  "ok" : true,
  "status" : 200,
  "name" : "Lady Octopus",
  "version" : {
    "number" : "0.90.5",
    "build_hash" : "c8714e8e0620b62638f660f6144831792b9dedee",
    "build_timestamp" : "2013-09-17T12:50:20Z",
    "build_snapshot" : false,
    "lucene_version" : "4.4"
  },
  "tagline" : "You Know, for Search"
}

[STEP 3] create index movies

Put the following script in create_index.sh, and then execute it.
curl -XPUT 'http://localhost:9200/movies/'

$ bash create_index.sh

[STEP 4] apply mapping for type 'Movie'
Put the following script in movie_mapping.sh, and then execute it.

curl -X PUT localhost:9200/movies/Movie/_mapping -d '{
    "Movie" : { "properties"  : {
                    "title"    : { "type" : "string" },
                    "director" : { "type" : "string" },
                    "year"     : { "type" : "long" }
                }
              }
}'

$ bash movie_mapping.sh
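To check that the mapping was applied, it can be read back through the standard _mapping endpoint:

$ curl -XGET 'http://localhost:9200/movies/Movie/_mapping?pretty=true'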


[STEP 5] Indexing (create/update) documents
Syntax:
$ curl -XPUT http://localhost:9200/<index>/<type>/<id>
where a type in ES corresponds to a table in an RDBMS.

Put the following script in movie_document.sh, and then execute it.
curl -XPUT 'http://localhost:9200/movies/Movie/1' -d '
{
    "title": "The Godfather",
    "director": "Francis Ford Coppola",
    "year": 1972
}'

$ bash movie_document.sh

The response we get is:
{"ok":true,"_index":"movies","_type":"Movie","_id":"1","_version":1}

And check the elasticsearch console for the following updates:
[2013-10-22 00:12:46,835][INFO ][cluster.metadata         ] [Aardwolf] [movies] creating index, cause [auto(index api)], shards [5]/[1], mappings []
[2013-10-22 00:12:47,944][DEBUG][action.index             ] [Aardwolf] Sending mapping updated to master: index [movies] type [movie]
[2013-10-22 00:12:47,961][INFO ][cluster.metadata         ] [Aardwolf] [movies] update_mapping [movie] (dynamic)


[STEP 6.1] Retrieve document by id
Syntax : 
$ curl -XGET http://localhost:9200/<index>/<type>/<id>

Execute the following command to retrieve the movie with document id 1:
$ curl -XGET "http://localhost:9200/movies/Movie/1?pretty=true"
{
  "_index" : "movies",
  "_type" : "Movie",
  "_id" : "1",
  "_version" : 1,
  "exists" : true,
  "_source" : {
    "title" : "The Godfather",
    "director" : "Francis Ford Coppola",
    "year" : 1972
  }
}

[STEP 6.2] Search documents by type field (title)
$ curl -XGET "localhost:9200/movies/Movie/_search?q=title:Godfather"

 {"took":22,
"timed_out":false,
"_shards":{"total":5,"successful":5,"failed":0},
"hits":{
"total":1,
"max_score":0.30685282,
"hits":
[{"_index":"movies",
"_type":"Movie",
"_id":"1",
"_score":0.30685282, 
"_source" : 
{
    "title": "The Godfather",
    "director": "Francis Ford Coppola",
    "year": 1972 
}
}]
}
}


OR, using the Elasticsearch Query DSL.
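A sketch of the equivalent search expressed as a match query (same index, type, and field as above):

curl -XPOST 'http://localhost:9200/movies/Movie/_search?pretty=true' -d '
{
  "query" : {
    "match" : { "title" : "Godfather" }
  }
}'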





[7] Not to miss 
[7.1] 40 mins presentation by +Shay Banon 


                                   Shay Banon - ElasticSearch: Big Data, Search, and Analytics

The presentation includes the following topics:
data design patterns
The "kagillion" shards problem
Simple data flows
"Users" data flow
"time" data flow (clicks etc)
more than search
questions?
etc

[7.2] ElasticSearch, "You know, for search", a presentation by Clinton Gormley.
Also watch him in "Getting down and dirty with Elasticsearch".





[7.3] the book Exploring Elasticsearch, by Andrew Cholakian



References
ElasticSearch 101 – a getting started tutorial, http://joelabrahamsson.com/elasticsearch-101/

ElasticSearch in 5 minutes, http://www.elasticsearchtutorial.com/elasticsearch-in-5-minutes.html

http://blog.florian-hopf.de/2013/09/simple-event-analytics-with.html

http://www.javacodegeeks.com/2013/04/getting-started-with-elasticsearch.html

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-health.html

http://thediscoblog.com/blog/2013/09/03/effortless-elasticsearch-clustering/

Scaling massive elastic search clusters - Rafał Kuć - Sematext, http://www.slideshare.net/kucrafal/scaling-massive-elastic-search-clusters-rafa-ku-sematext



Shards and replicas in Elasticsearch, http://stackoverflow.com/a/15705989/432903

Friday, 11 October 2013

Existence of Mathematics


TRICK 1
http://www.youtube.com/watch?v=pWtF94cTZ6k


The mathematics behind it:

STEP 1 The piles are
Pile 1 : 10 cards
Pile 2 : 15 cards
Pile 3 : 15 cards
with 9 cards remaining in hand.

STEP 2 Each selected card goes on top of its pile and is buried under a cut from the next pile:
Pile 1 becomes 10 + 1 (selected card) + x cards cut from Pile 2
Pile 2 becomes y (its remainder) + 1 (selected card) + x' cards cut from Pile 3
Pile 3 becomes y' (its remainder) + 1 (selected card) + the 9 cards in hand

STEP 3
Stacking the piles, the whole deck from the bottom is
Deck = (10+1) + (x+y+1) + (x'+y'+1) + 9

Since x+y = 15 (Pile 2) and x'+y' = 15 (Pile 3),
Deck = (10+1) + (15+1) + (15+1) + 9
so the first selected card sits at position 11 from the bottom, the second at 11+16 = 27, and the third at 27+16 = 43.

From the bottom, the positions of the selected cards are
(11, 27, 43) => (38, 22, 6) from the top.

Adding 4 cards to the bottom, the positions (from the bottom) of the selected cards become
(15, 31, 47)

STEP 4
The up-and-down dealing leaves the cards dealt "down" (the even positions), moving the selected cards through the following positions:
(15, 31, 47)
( 8, 16, 24)
( 4,  8, 12)
( 2,  4,  6)
( 1,  2,  3)

TRICK 2



References
http://www.youtube.com/results?search_query=mathematics+card+trick&oq=mathematics+card+&gs_l=youtube.3.0.0l2j0i10l3j0i5i10l5.7781.20275.0.22316.17.16.1.0.0.0.457.3182.1j8j6j0j1.16.0...0.0...1ac.1.11.youtube.Chhz7mOAxu4