as inode :: My Information Node!

What is `git-all` all about?

I am not answering the question. I am asking!

The question is straightforward.

What does `sudo apt-get install git-all` do?

I tried to understand this myself by looking at the output of the above command in my terminal. Let’s take a look at it.

vagrant@precise64:~$ sudo apt-get install git-all
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common cvs
  cvsps emacs emacs23 emacs23-bin-common emacs23-common emacsen-common fgetty
  fontconfig fontconfig-config gawk gconf-service gconf-service-backend
  gconf2-common git git-arch git-cvs git-daemon-run git-doc git-el git-email
  git-gui git-man git-svn gitk gitweb hicolor-icon-theme libapr1 libaprutil1
  libaprutil1-dbd-sqlite3 libaprutil1-ldap libasound2 libatk1.0-0 libatk1.0-data
  libauthen-sasl-perl libavahi-client3 libavahi-common-data libavahi-common3
  libcairo2 libcroco3 libcups2 libdatrie1 libdb4.8 libdbd-sqlite3-perl
  libdbi-perl libdigest-hmac-perl libemail-valid-perl libencode-locale-perl
  liberror-perl libfile-listing-perl libfont-afm-perl libfontconfig1 libfontenc1
  libgconf-2-4 libgd2-noxpm libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-common libgif4
  libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa libgpm2 libgtk2.0-0 libgtk2.0-bin
  libgtk2.0-common libhtml-form-perl libhtml-format-perl libhtml-parser-perl
  libhtml-tagset-perl libhtml-tree-perl libhttp-cookies-perl libhttp-daemon-perl
  libhttp-date-perl libhttp-message-perl libhttp-negotiate-perl libice6
  libio-socket-inet6-perl libio-socket-ssl-perl libjasper1 libjpeg-turbo8
  libjpeg8 libllvm3.0 liblwp-mediatypes-perl liblwp-protocol-https-perl libm17n-0
  libmailtools-perl libneon27-gnutls libnet-daemon-perl libnet-dns-perl
  libnet-domain-tld-perl libnet-http-perl libnet-ip-perl libnet-smtp-ssl-perl
  libnet-ssleay-perl libotf0 libpango1.0-0 libpixman-1-0 libplrpc-perl librsvg2-2
  libsigsegv2 libsm6 libsocket6-perl libsvn-perl libsvn1 libterm-readkey-perl
  libthai-data libthai0 libtiff4 liburi-perl libutempter0 libwww-perl
  libwww-robotrules-perl libx11-xcb1 libxaw7 libxcb-glx0 libxcb-render0
  libxcb-shape0 libxcb-shm0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3
  libxft2 libxi6 libxinerama1 libxmu6 libxpm4 libxrandr2 libxrender1 libxss1
  libxt6 libxtst6 libxv1 libxxf86dga1 libxxf86vm1 m17n-contrib m17n-db patch
  runit shared-mime-info ssl-cert tcl tcl8.5 tk tk8.5 tla tla-doc ttf-dejavu-core
  x11-common x11-utils xbitmaps xterm
Suggested packages:
  www-browser apache2-doc apache2-suexec apache2-suexec-custom mksh rcs
  emacs23-el aspell subversion httpd-cgi libcgi-fast-perl libasound2-plugins
  libasound2-python libgssapi-perl cups-common libgd-tools libglide3 gpm
  librsvg2-common gvfs libdata-dump-perl libjasper-runtime libcrypt-ssleay-perl
  m17n-docs ttf-baekmuk ttf-arphic-gbsn00lp ttf-arphic-bsmi00lp
  ttf-arphic-gkai00mp ttf-arphic-bkai00mp librsvg2-bin libauthen-ntlm-perl
  diffutils-doc socklog-run openssl-blacklist tclreadline ssh mesa-utils
  xfonts-cyrillic
The following NEW packages will be installed:
  apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common cvs
  cvsps emacs emacs23 emacs23-bin-common emacs23-common emacsen-common fgetty
  fontconfig fontconfig-config gawk gconf-service gconf-service-backend
  gconf2-common git git-all git-arch git-cvs git-daemon-run git-doc git-el
  git-email git-gui git-man git-svn gitk gitweb hicolor-icon-theme libapr1
  libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libasound2 libatk1.0-0
  libatk1.0-data libauthen-sasl-perl libavahi-client3 libavahi-common-data
  libavahi-common3 libcairo2 libcroco3 libcups2 libdatrie1 libdb4.8
  libdbd-sqlite3-perl libdbi-perl libdigest-hmac-perl libemail-valid-perl
  libencode-locale-perl liberror-perl libfile-listing-perl libfont-afm-perl
  libfontconfig1 libfontenc1 libgconf-2-4 libgd2-noxpm libgdk-pixbuf2.0-0
  libgdk-pixbuf2.0-common libgif4 libgl1-mesa-dri libgl1-mesa-glx libglapi-mesa
  libgpm2 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common libhtml-form-perl
  libhtml-format-perl libhtml-parser-perl libhtml-tagset-perl libhtml-tree-perl
  libhttp-cookies-perl libhttp-daemon-perl libhttp-date-perl libhttp-message-perl
  libhttp-negotiate-perl libice6 libio-socket-inet6-perl libio-socket-ssl-perl
  libjasper1 libjpeg-turbo8 libjpeg8 libllvm3.0 liblwp-mediatypes-perl
  liblwp-protocol-https-perl libm17n-0 libmailtools-perl libneon27-gnutls
  libnet-daemon-perl libnet-dns-perl libnet-domain-tld-perl libnet-http-perl
  libnet-ip-perl libnet-smtp-ssl-perl libnet-ssleay-perl libotf0 libpango1.0-0
  libpixman-1-0 libplrpc-perl librsvg2-2 libsigsegv2 libsm6 libsocket6-perl
  libsvn-perl libsvn1 libterm-readkey-perl libthai-data libthai0 libtiff4
  liburi-perl libutempter0 libwww-perl libwww-robotrules-perl libx11-xcb1 libxaw7
  libxcb-glx0 libxcb-render0 libxcb-shape0 libxcb-shm0 libxcomposite1 libxcursor1
  libxdamage1 libxfixes3 libxft2 libxi6 libxinerama1 libxmu6 libxpm4 libxrandr2
  libxrender1 libxss1 libxt6 libxtst6 libxv1 libxxf86dga1 libxxf86vm1
  m17n-contrib m17n-db patch runit shared-mime-info ssl-cert tcl tcl8.5 tk tk8.5
  tla tla-doc ttf-dejavu-core x11-common x11-utils xbitmaps xterm
0 upgraded, 158 newly installed, 0 to remove and 66 not upgraded.
Need to get 71.4 MB of archives.
After this operation, 228 MB of additional disk space will be used.
Do you want to continue [Y/n]?

I am not really sure I need all these things. I tried googling it, but it seems nobody ever thought of writing anything about it. Not even GitHub itself? Or maybe I simply failed to find an article online that talks about it.

Anyway, I hope somebody will find some time to add some comments here one day. And I thank that person in advance.
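For whoever lands here with the same question: apt itself can shed some light. As far as I can tell, git-all is a metapackage: it installs nothing by itself, but depends on git plus every add-on component (git-svn, git-cvs, gitweb, git-gui and so on), and those in turn drag in Apache, Emacs and the rest. A sketch of how to check this yourself on a Debian/Ubuntu system:

```shell
# Show the package description; for a metapackage the description
# says it simply pulls in other packages.
apt-cache show git-all

# List the dependencies git-all declares. These are what dragged in
# apache2, emacs, etc. in the output above.
apt-cache depends git-all

# If you only want plain git without the extras:
sudo apt-get install git
```

Each dependency has its own dependency tree, which should be where the 158 packages come from.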

Quick and Dirty Introduction to NGS Data Analysis

Part-1


I will try to write this in 3 parts:

- Basic Introduction
- Linux Basics
- Data Analysis

And this is the first part.

I recently enjoyed sitting through a 3-day training program on the basics of NGS Data Analysis. The training began with hands-on practice with the essential Linux commands grep, sed and awk. Thanks to Mr. Shantanu Kumar, the trainer.

The title of the workshop has 3 key words: NGS, Data and Analysis. All of them are clichés in the present market, though the term NGS is perhaps more pronounced among the biologist community. I thought I should keep a record, in a blog post, of what I learned there, for my own benefit.

NGS stands for Next Generation Sequencing. It is one of the most rapidly developing technologies in biology.

Genomics, fused with NGS technologies and bioinformatics, is a robust field at the intersection of science, technology and mathematics.

More specifically, it is the intersection where biology, mathematics and information technology meet. This intersection goes by many names, such as Bioinformatics and Computational Genomics. Genomics is the study of genomes with the help of the powerful tools of information technology, and NGS forms the basis for such studies on the biological side.

A genome is nothing but a patterned arrangement of 4 nucleotide bases (Adenine, Thymine, Guanine, Cytosine), represented by the letters A, T, G and C. The term ‘patterned’ can be deceiving, because it is not a pattern that we humans can easily recognize. The patterns need novel and often complex deciphering algorithms to be analyzed.

Several biological structures, like cells and proteins, whose functions inevitably depend on these patterns, also have a patterned behavior that can be understood using these bioinformatic tools. The patterns have standards: for example, a human genome should always consist of the same pattern of ATGCs, except for a few minor changes. Changes that affect the color of your skin, the thickness of your hair, the color of your eyes, the way you walk, etc.

In layman’s terms, a human genome contains 3.1 billion letters, or nucleotides. However, there can be (at least) 3 basic types of deviation from this fact.

  • Some letters might be missing - DELETION.
  • Some new letters are added - INSERTION.
  • Or, there can be the same number of letters, but with one letter changed, like a C in place of a G - SUBSTITUTION (which is how different alleles arise).

These deviations can be simplified to a term called MUTATION.
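These three deviations are easy to picture with plain Python string operations on a toy sequence (the sequence and positions here are made up purely for illustration):

```python
reference = "ATGCGT"

# DELETION: the base at index 2 (a G) goes missing.
deletion = reference[:2] + reference[3:]            # "ATCGT"

# INSERTION: a new base (an A) appears between indices 1 and 2.
insertion = reference[:2] + "A" + reference[2:]     # "ATAGCGT"

# SUBSTITUTION: a C appears in place of the first G.
substitution = reference[:2] + "C" + reference[3:]  # "ATCCGT"

print(deletion, insertion, substitution)
```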

Why don’t all humans look the same? Why don’t they have the same skin color? Why do some have special attributes (in biological terms, ‘traits’)? What about the differences between humans and other organisms? What is cancer? How do blood groups differ? Questions like these can be explored by this revolutionary blend of science and technology.

Sequencing

Sequencing is a method by which all the letters in a genome are listed out for us to analyze. The process is not as simple as it sounds here.

Illumina Inc. is one of the most advanced technology companies in the world to have achieved this task with a high level of accuracy. They use cutting-edge technologies in carefully tuned proportions and integrate IT into biology wet-lab tasks for this grand pursuit. Their machines use an innovative flow cell technology that dramatically enhances their efficiency.

Base Calls

Each flow cell has billions of nano-sized wells arranged in regular arrays. Genomes are loaded onto these flow cells following Illumina’s patented chemistry protocols. The machines are equipped with high-definition cameras capable of capturing images of the clusters of nucleotides. The image files generated by the machine are then converted on the fly into the ‘.bcl’ file format, which encodes the 4 known letters, using sophisticated embedded algorithms. This process is called Base Calling.

The .bcl files are then subjected to a process called De-Multiplexing, which produces the final data in the ‘FASTQ’ file format.

A sample video which explains Illumina’s breakthrough science behind the flow cell technology is here, and much such learning material is freely available on their website.

Quality Check

Quality Check is the first level of data analysis, before you reach the stage where you worry about mining the data further for answers to your biological questions. With my limited knowledge, I cannot explain how quality scores are computed, but for analysis purposes I can touch upon a few important points.

Coverage

When you are sequencing a gene or a genome, you obviously worry about how much of that gene has been covered during sequencing: how many of its constituent bases have been identified (COVERAGE) and how many times each base has been called (DEPTH). I am over-simplifying the definitions only to give an outlook on the whole concept. These are parameters that can be set on a sequencer to achieve the required range of sequencing: the read length and the depth. On an Illumina sequencer, the read length is usually 100 while the depth can vary based on the research objective.
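As a rough back-of-the-envelope sketch, average depth is the total number of sequenced bases divided by the genome size (the read count below is invented just to make the arithmetic come out even):

```python
genome_size = 3_100_000_000   # ~3.1 billion bases in a human genome
read_length = 100             # typical Illumina read length
num_reads = 930_000_000       # hypothetical number of reads

# Average depth = total bases sequenced / genome size
depth = num_reads * read_length / genome_size
print(depth)  # 30.0, i.e. each base is covered ~30 times on average
```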

Phred Scores

Any machine has a limited amount of accuracy. Illumina’s are among the most accurate, but not 100% accurate. Therefore, Illumina machines are equipped with algorithms that check the quality of the sequencing data you just acquired: the quality of each nucleobase called into the data table.

Apart from a quality score for each base, the overall read is also given a quality score, obtained by averaging the scores of the individual bases.

Quality scores represent the probability of a base being incorrectly sequenced by the machine.

That means, this score inherently contains the information on how accurate the data is.

For example, a Phred score of 10 represents an accuracy of 90%, the probability of a base being incorrectly called being 1 in 10. Similarly, a Phred score of 50 represents 99.999% accuracy, with an error probability of 1 in 100,000.

More about this quality scoring method is here.

FASTQ File

A FASTQ file is a FASTA file (the data of the sequenced bases) along with a quality score for each base sequenced.

Below is a typical FASTQ record.

@HWI-ST1265:101:BC3YNHACXX:5:1101:1161:2121 1:N:0:ATCACG
GGTTAGGGTTAGGGTTAGGGTTAGGGTTAGGGTTAGG
+
C<B>ADD4@688?6?<BBB:B<BBC:B<ABG<G=@@8

Each FASTQ file has its data points represented in records of 4 lines like the ones above.

  • First line is metadata about the machine and the position of the read. The meanings of the important parts of this line are below.

@HWI-ST1265   Unique instrument name
101           Run ID
BC3YNHACXX    Flow cell ID
5             Flow cell lane
1101          Tile number within the flow cell lane
1161          ‘x’ coordinate of the cluster within the tile
2121          ‘y’ coordinate of the cluster within the tile
1             Member of a pair (1 or 2, if paired-end)
N             Y if the read fails the filter (read is bad), N otherwise
  • Second line is the sequence read.
  • Third line, only a + sign, is a separator between the second and fourth lines.
  • Last line is the series of quality scores, one for each base in the read, in the same order.
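A minimal sketch of reading these 4-line records in Python (the grouping logic is the standard one; the function and variable names are my own):

```python
import io

def read_fastq(handle):
    """Yield (header, sequence, quality) tuples from a FASTQ file object."""
    while True:
        header = handle.readline().rstrip()
        if not header:            # end of file
            break
        sequence = handle.readline().rstrip()
        handle.readline()         # the '+' separator line, ignored
        quality = handle.readline().rstrip()
        yield header, sequence, quality

# The example record from above, as an in-memory file:
record = ("@HWI-ST1265:101:BC3YNHACXX:5:1101:1161:2121 1:N:0:ATCACG\n"
          "GGTTAGGGTTAGGGTTAGGGTTAGGGTTAGGGTTAGG\n"
          "+\n"
          "C<B>ADD4@688?6?<BBB:B<BBC:B<ABG<G=@@8\n")

for header, seq, qual in read_fastq(io.StringIO(record)):
    print(header.split(":")[0])   # the instrument name field
    assert len(seq) == len(qual)  # one quality character per base
```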

I am sure your mind is stuck on the last line. I am calling them scores, yet none of them seems anywhere close to a score. Not even a number!

If the scores go beyond a single digit, they may occupy more memory. To handle the cases where the data may be really huge, people found a way to constrain the memory consumption. That is, representing each number by a single ASCII character. See the ASCII table below.

You can look up the corresponding ASCII character for each number from 0 to 127. But there is a small catch here. In a data file, such as a FASTQ file, you cannot print the character corresponding to the ASCII number 127, which is DEL. Similarly, the characters from 0 to 32 cannot be used for our purposes either. Therefore, the formula for calculating the quality score based on the ASCII table is:

QV = (ASCII value of the quality character) - 33

And how this QV is useful can be understood from the formula:

P = 10^(-QV/10), that is, QV = -10 log10(P)

where QV is the quality value and P is the probability of error.

| Base Quality | Probability of Incorrect Base Call | Base Call Accuracy |
|---|---|---|
| 10 | 1 in 10 | 90% |
| 20 | 1 in 100 | 99% |
| 30 | 1 in 1,000 | 99.9% |
| 40 | 1 in 10,000 | 99.99% |

Therefore, if the base quality score is 10, the base call accuracy is 90%. Usually, bases with a quality score of at least 25-30 are considered for further analysis.
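Decoding the quality string from the earlier example takes only a couple of lines (the offset of 33 is the standard Sanger/Illumina 1.8+ encoding: QV = ASCII value - 33, and P = 10^(-QV/10)):

```python
quality_line = "C<B>ADD4@688?6?<BBB:B<BBC:B<ABG<G=@@8"

# QV = ASCII value - 33
scores = [ord(ch) - 33 for ch in quality_line]
print(scores[:4])   # [34, 27, 33, 29] for 'C', '<', 'B', '>'

# P = 10^(-QV/10): probability that each base call is wrong
error_probs = [10 ** (-q / 10) for q in scores]
print(round(error_probs[0], 5))  # ~0.0004, so 'C' is ~99.96% accurate
```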


Let’s take a break here! I will continue with the basic Linux commands useful for some preliminary data analysis in the next post.

Meanwhile, I will try this task.

Task: Count the number of A’s, T’s, G’s and C’s in the following sequence read! ;-)
AGGCAGGTCTCAAACTCCTGACCTCAGGTGATCCACCCACCTCAAGCCTCCCAAAGTGCTGGGATTATAGGCATGAGCCACCATGTCCGGCAAGTTTCTTT
CGGGGACTCCTGACAAGTGATCCACCTGCCTCGGCCTCCCAAAGTGCTGGGATTACAGACATGAGCCACCATGCCCAGCCTCCAGCCCATCATTTCTTGATGATTT
DCGCGGGGFGGTTTGCCATAAAGTGCCTGCCCTCTAGCCTCTACTCTTCCAGTTGCGGCTTATTGCATCACAGTAATTGCTGTACGAAGGTCAGAATCGCTACCTATTGTCC
AGGCAGGTCTCAAACTCCTGACCTCAGGTGATCCACCCACCTCAAGCCTCCCAAAGTGCTGGGATTATAGGCATGAGCCACCATGTCCGGCAAGTTTCTTT
CGACTCCTGACAAGTGATCCACCTGCCTCGGCCTCCCAAAGTGCTGGGATTACAGACATGAGCCACCATGCCCAGCCTCCAGCCCATCATTTCTTGATGATTT
DCGCGGGGTTTGCCATAAAGTGCCTGCCCTCTAGCCTCTACTCTTCCAGTTGCGGCTTATTGCATCACAGTAATTGCTGTACGAAGGTCAGAATCGCTACCTATTGTCC
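A quick sketch of how I might attempt it in Python, using collections.Counter on the first read above (note that a couple of the reads contain characters other than A, T, G and C, which a count like this would expose):

```python
from collections import Counter

read = ("AGGCAGGTCTCAAACTCCTGACCTCAGGTGATCCACCCACCTCAAGCCTCCCAAAGTGCTGGG"
        "ATTATAGGCATGAGCCACCATGTCCGGCAAGTTTCTTT")

counts = Counter(read)        # tallies every character in the string
for base in "ATGC":
    print(base, counts[base])
```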

More ... Into the Wild of Coding!

This time, I got a bit closer to the purpose and the passion behind all this workout. The Entry Scene of Mathematics.


Yet! This is getting really crazy. It has not been long since I wrote a similar description of the excitement I have been drawing from this world of Programming. I listed there all the things “coding” that became my favorites along the way. I have more to share here.

Somebody told me,

> When you acquire new knowledge, do not despair and get carried away!

I can see now why he repeatedly told me this.

Ever since I chose to enter this ‘magnus’ and magnanimous world of programming and open source, I only get to become more speechless, at times go horribly paranoid, at every momentary ‘bit’ of information I pass by.

It looks to me as if this is all just mankind’s attempt, from one tiny planet in this whole universe, to create one entirely new world.

I have been working my way (not out of choice, though) toward a secure and thrilling career in my current field of corporate communications. Perhaps this is what every one of us struggles to achieve in a career: “secure and thrilling”. The only difference is how strategic and disciplined the struggle is.

I think when your educational qualifications do not match your profession, you are surely bluffing yourself every passing minute. Between the past (education) and the present (profession), you will end up respecting only one of them from the bottom of your heart. Because education (especially along with the learning component ;-) is something that affects you the way most addictions affect you. Something like smoking! As long as you do not know how it tastes or feels, you are fine!

At my current workspace, I keep fiddling with my communications tools such as Microsoft Word, Excel and PowerPoint, while a bunch of colleagues around me, much younger than me, take their joyful and frustrating rides through coding. My work neither frustrates me nor excites me. What frustrates me is that I don’t belong on the other side: the world of technology. When colleagues casually talk about their struggle to get a Python script to parse whole genome sequencing data or something like that, I feel like an alien (if not like sitting among aliens, same thing anyway).

All these feelings have nothing to do with the field I am in. It is like sitting in a foreign land with native stigmata on the face. And the reason, as I found through numerous sleepless nights, is not that my current field is not exciting. There are so many things that excite me within corporate communications, such as social media branding and competitive analysis. But I still cannot claim that identity, because I have already tasted something totally unrelated to this field: The Science!

Whether my skills accept it or not, I want to identify myself only with science. Paradoxically, I realize I am not as much of a stranger in this field as I think. Given my educational qualifications in the strictly basic sciences (Math, Physics and Chemistry), I am certainly not a stranger here. I just need to go to the next level. My new zeal for learning programming inspired me to re-establish my old “complicated” relationships with Mathematics and Physics.

I was reading Joe’s Introduction to regular expressions and it reminded me of traditional Set Theory in mathematics. RegEx phrases look like set theory expressions. Something like writing

“{3x^2 + 2a | x in Z & a in R}”

to describe

a set of numbers of the form 3x^2 + 2a such that all x belong to the integers and all a belong to the real numbers.
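The resemblance goes even further: Python’s set comprehensions are almost a literal transcription of set-builder notation. A sketch using small finite samples (a computer cannot enumerate all of Z or R, so the ranges below are my own choice):

```python
integers = range(-3, 4)        # a finite slice of Z
reals = [0.0, 0.5, 1.0]        # a finite sample of R

# { 3x^2 + 2a | x in Z, a in R }, restricted to the samples above
s = {3 * x**2 + 2 * a for x in integers for a in reals}
print(sorted(s))
```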

I found a lot of other elementary-level, obvious connections between programming and math. So I chose to learn programming while refreshing my math. I currently spend at least a couple of hours each day reading intro articles on elementary algebra, finding roots of polynomials or memorizing essential differentiation formulas, and writing simple scripts for such problems in Jupyter Notebooks (previously IPython Notebooks).
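For instance, finding the roots of a quadratic makes a nice first script (the quadratic formula here is the standard one; the example polynomial is my own):

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b**2 - 4 * a * c
    if disc < 0:
        return []                 # no real roots
    r = math.sqrt(disc)
    return sorted(((-b - r) / (2 * a), (-b + r) / (2 * a)))

print(quadratic_roots(1, -5, 6))  # x^2 - 5x + 6 = 0 -> [2.0, 3.0]
```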

If Max Tegmark says that this universe is entirely made up of mathematics, it must be true! Because this thesis has been challenging our imagination since as far back in time as when Pythagoras simply said,

All things are numbers!

When I hear this word, mathematics, my heart either skips a beat or speeds up. I am not sure what it is about this subject that does only one of two things to us: it either attracts us or repels us. Whatever it does, it does it very strongly. The impact it creates on our minds is deep and inescapable.

In this whole attempt, I got to know Joe of JoeQuery, who is helping me by paving a learning track in Python as a step toward Scientific Computing, my long-term goal. And through my brief visits to myriad websites and Google searches on topics spanning math, programming and Python, I now know of something called SAGE.

SAGE, the System for Algebra and Geometry Experimentation, is an extremely powerful Python programming environment meticulously designed for mathematics. I went on to learn about SageMathCloud, where I can do any number of ‘math coding’ things in the cloud. I also learned about the SAGE Notebook environment, which is similar to Jupyter Notebooks, especially the much anticipated JupyterHub. I came to know SAGE Interact, which contains “Inspire Me” stuff: a bunch of super-scientific examples of what you can do with SAGE. This is similar to Matplotlib’s Gallery.

Just one of Joe’s blogs led me to several other very interesting posts by him, such as:

- What is a function?
- More intro to RegExp
- A book titled Set Theory for Computer Science
- Intro to Logic in 3 parts (Link-1, Link-2, Link-3)

If this whole finding is the work of just about two days, I cannot imagine how many more things are out there. Where should I begin my seemingly ambitious endeavor of scientific computing, and what should my path be to reach there?

Some of the most interesting and generous Math & Coding resources I came across:

  1. Math for Kids
  2. Learn Code. Learn Math.
  3. Coding Math
  4. Math Programming Curriculum for Middle School
  5. An inspiring article on Math, Coding and Python

Into the Wild of Coding


At first, it seemed inevitable that I would be lost in this wild, for the usual reasons. I was going to try something without any background, without even someone to correct me if I went wrong. Also, dealing with technology marvels like GitHub and godzillas like Amazon AWS is no joke, especially when I know I am an authentic illiterate in this world. I know no programming; I have just started out on Python. That is another atypical wild journey. All I have with me is an enormous amount of motivation, driven by my friends, who show an any-time willingness to introduce me to these brave new worlds.

I used to blog on a lot of emotional, uninteresting topics, basically either to vent my frustrations or to embrace mild satisfactions in life. (I am probably doing that here too.) Or some sort of poetry about nature and feelings, which I still respect doing for some reason.

I never bothered about how many people read them. I just wanted to exploit the invincible world of open source. I started off on Blogspot, went on to set up a couple of blogs on WordPress, and finally transmigrated my old blog to Svbtle. All I was doing was riding an emotional roller-coaster. What anybody would instantly hate about this ride is that it is dead silent. No noise at all.

The day I started learning programming, I learned my first lesson from my first impressions and experiences. That is, being to the point. Having learned this, I can go back to my usual blog posts and apply another of my learnings. That is, asking myself one right question: “What’s your point? What do you want to say?”

Computers are really gentle in this case. They are never harsh or sarcastic in their reactions. They care about and take seriously everything I do, no matter how silly it may be. They give me chances to correct myself indefinitely. But they will not understand what I want to say as long as I use the language of craving. A computer understands only when I talk in its language, within its vocabulary-handling capacity. I instantly saw a great difference in my attitude. My sentences are much shorter now. I no longer worry about decorating my phrases with classy adjectives. I now know to break long sentences into shorter, easier-to-read ones.

Logic is all that it takes. It is not easy, but it is essential. I now believe that programming may make me a better thinker. A logical thinker. I am not there yet, but I soon will be. At least now I have my favorite tools at my disposal, which I consider a great upgrade of myself.

  • I use Atom for a lot of writing material, such as this post. Atom and Light Table are currently my favorite editors. PyCharm is currently my favorite IDE.
  • I extensively use the markdown language for all these posts.
  • I am a huge fan, follower and user of the IPython Notebook.
  • I host my website, where my code learning appears, on an Amazon EC2 instance. I also learned how to set up websites using GitHub Pages.
  • I use SSH for all my EC2 site maintenance.
  • I use Pandoc and nbconvert to convert my markdown files into HTML.
  • I simply love GitHub, Git Gists and NBViewer for their natural complexity, like in a flower.
  • The less I say about matplotlib, the better. It is meant for giants, wizards, saints and super coders; it is what the ISS is for NASA. Not meant for the common man. But I love working with it at my amateurish pace, thanks to my background in physics and mathematics.
  • I regularly take pleasure in visiting noble and impeccable voluntary works such as Roguelynn’s, and envying impossible works such as Jake’s.

And there is a lot of other stuff I am not keeping here. I am sure the very few others in this world who are as weak in this field as I am will surely find this post encouraging and informative. At least for me, this is a track record with which to evaluate myself.

I am already enjoying coding, without really learning much of it yet.

Quick Note on IPy Notebook


Lost my mind when I first learned it.

That is how lost I was in this (I)Python world, and that is how much awe I hold for IPython! Get a chance to learn the IPython Notebook, and instant excitement is guaranteed!

## It’s magic.

Try these steps! They are fun, but motivational.

Go to this link: http://nbviewer.ipython.org/

It looks like a Google search engine. In the search box, paste this link: https://gist.github.com/asmind/8667c570b20ecae8f50d

And … Voila!

See what it is? That’s the new IPython Notebook I created while learning. It is such magic. You know all I needed to do?

First, launch an IPython Notebook session from the terminal using `ipython notebook --pylab` from the directory of your choice.

The `--pylab` flag loads the PyLab interface, which pre-imports NumPy and matplotlib and keeps ready almost all the packages you need for some serious scientific computing. Learn what matplotlib is on this wiki page.

The command above launches a web browser showing the IPython Notebook dashboard at a local address. Click on the NEW button on the right; it creates a new notebook.

Do whatever you want in it: write Python code, write markdown docs, execute normal terminal commands, etc. It is autosaved. Give it a title at the top (click on UNTITLED). Autosaved again!

Close it!

Go to the folder where it saved this file, e.g., IPy Intro.ipynb.

Double-click it and it opens in gedit. This is actually a JSON file (I am yet to learn this part, though that doesn’t stop me from writing). Paste the entire content of this file into http://gist.github.com

Now create a Public or Private Gist (see the options below the box where you pasted your text). This creates a Gist link, which is what I shared above. When this JSON file is opened the way I showed above (before the Voila), it renders like an IPython Notebook. This way, any number of people can collaborate on this work. Isn’t this magic?
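To see for yourself that a notebook really is just JSON, here is a tiny sketch that parses a minimal, hand-written imitation of one (the cell contents are made up; a real .ipynb carries more metadata):

```python
import json

# A stripped-down imitation of what an .ipynb file contains.
notebook_text = """
{
  "nbformat": 4,
  "cells": [
    {"cell_type": "markdown", "source": ["# My title"]},
    {"cell_type": "code", "source": ["print('hello')"]}
  ]
}
"""

notebook = json.loads(notebook_text)
for cell in notebook["cells"]:
    print(cell["cell_type"], "->", "".join(cell["source"]))
```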

IPython, my latest love!

This is it for now. I am losing my mind…..

## To be Continued …. !!!