॥ स्वक्ष ॥


2012 August 29 [Wednesday]

Contributing to Libre software

"How do I start contributing to Libre Software?" is a very common question (I asked it too) that one comes across on most FOSS lists. Today, I posted the following on a private list and was asked for a public link, so here goes:


There are many Libre software projects to choose from, so choosing one project can be quite confusing when you are starting out. Do yourself a favor and take a few moments to do a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis before you decide to jump onto the Libre software bandwagon.

It's better to give yourself time to think about (or write down) which technical area or field interests you:
  • Which language do you want to program in?
  • Is it front-end software or backend stuff?
  • Web programming or something else?
  • Do you like writing system software or application level software?
  • Or, do you like libraries, or prefer working with algorithms/statistical applications, etc.?

Once you have figured out your field of interest, it's easier to shortlist something and get started on finding a project to work on.

From Failure to Success

If you are still having a hard time figuring out your interests, fret not! It's OK to NOT know what you want when you are starting out - make sure you keep an open mind and be willing to try out new things that are unfamiliar (and sometimes hard and confusing) and/or fail at them. Failure is a good teacher!

It's what you do when you fail that distinguishes you from a successful person. If you give up, your failure remains a failure, BUT if you decide to practice and try again (and Again and AGAIN), you can convert your failure into success. Remember, the bridge between failure and success is Practice!


As I mentioned above, there are so many Libre software projects that it's overwhelming at the outset. Having figured out your field, don't just visit a bug-tracker at random and try to solve bugs (not a bad idea, mind you, if you have only a few hours per week). However, if you want to wade a little deeper, use Google to your advantage - scan the orgs at the Google Summer of Code.

This recently concluded program has a ready list of organisations to choose from; the 2012 list is available at: http://www.google-melange.com/gsoc/program/accepted_orgs/google/gsoc2012. Besides these few hundred GSoC orgs, if you are interested in working outside of the SoC span, projects are always interested in contributors and would welcome your efforts 24x7x365. That said, SoC tasks require a longer commitment of 40 hours per week, so you need to decide what you want to do. It's not part-time work, so commitment and rigor are prerequisites.


After you have searched Melange (or ESA) for keywords of your choice, visit the Ideas page for each organisation. Here, you will find a list of tasks ranked by preference or difficulty level (NB: this entirely depends on the Org). Remember to cross-check with the Melange page whether the task has already been completed via GSoC or not.

If a task is still available, find out what is required to get started on it and prepare a short abstract. This will help you to:
  • figure out the development stack vis-a-vis your skillset,
  • realise how much time and effort is required to bridge the gap, if any;
  • prepare a timeline estimate. (Don't obsess over the timeline as it is just an estimate and it will vary if the Org changes any requirements.)

Once these are done, talk to the Org - always, Always, ALWAYS talk to the Org _before_ you start work on anything.
Just because a task is listed on the Ideas page does not mean it's a part of their workflow (which can always change), nor is the opposite true. The best way to find out is to talk to them first. Again, remember that these SoC tasks require a longer commitment in terms of time.


Most Libre projects have their own communication channels. These could be mailing lists or forums, as well as IRC channels on dedicated servers or on freenode. It's important to work with them via these public channels, and that means learning to communicate and not worrying about asking silly (psst.. there are none!) questions.
Communicating with the core developers and/or mentors and the community of users is crucial - they can be an invaluable source of ideas and helpful hints. Many projects have separate lists (and IRC channels) for users and developers. Join them and introduce yourself (or lurk around to get the hang of how things work) and, when you are ready, do talk about the task you want to work on.

A development mailing list, where the core developers would be available, is distinguishable by its "*-devel" mailing address. The same is true for IRC channels. If you like CLI tools, try Irssi; if you want a GUI client, try Quassel - or pick your poison from this list of IRC clients.


Finally, and most importantly, you must be comfortable working with the software the project uses - that means you should be able to clone the code, install it, and get it working on your local machine. Here, communicating with your Org helps - you can ask for help if you have hardware or software issues, clarify installation and dependency issues, etc. No software works flawlessly (else, people would be out of jobs :)) and Libre software is no exception - the only difference being "software development on a libre / public scale".

Another aspect of getting familiar with the development stack is familiarizing yourself with the project's internal systems. Since each project uses its own bug tracker, DVCS, wiki (for documentation), email/forum and IRC communication system, take some time to get familiar with each of these. If you plan to stick around for any length of time, you will be using some, or all, of the software stacks they use on a regular basis.
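To make the DVCS part concrete, here is a minimal sketch (driven from Python so it is self-contained) of the loop most git-based projects expect from a new contributor: commit a fix locally, then export it as an emailable patch. The repository, file name, and commit message below are all made up for illustration; substitute the project's actual clone URL and workflow.

```python
import glob
import os
import subprocess
import tempfile

# A throwaway repository standing in for a real project checkout.
repo = tempfile.mkdtemp(prefix="demo-")

def git(*args):
    # Run git inside the sandbox repo with a fixed identity,
    # so the sketch works without any global git config.
    subprocess.run(
        ["git", "-C", repo,
         "-c", "user.name=You",
         "-c", "user.email=you@example.org",
         *args],
        check=True,
    )

git("init", "-q")

# Pretend we fixed a bug by editing a file...
with open(os.path.join(repo, "bugfix.txt"), "w") as f:
    f.write("describe the fix here\n")

# ...then commit it and export the commit as a patch file.
git("add", "bugfix.txt")
git("commit", "-qm", "Fix: short summary of the bug you solved")
git("format-patch", "-1", "-o", repo)

print(glob.glob(os.path.join(repo, "*.patch")))
```

In a real project you would start from `git clone`, work on a branch, and send the generated patch (or a pull request) through whichever channel the Org's contribution guide names.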

Your transition from newbie to active contributor is a lot faster when you are comfortable with the development stack. Doing your homework before the SoC program starts will give you the confidence required to grok the code-base enough to start working on it, suggest changes, solve bugs, etc.

I hope these suggestions help you find your niche to learn and contribute in, and of course, welcome to Libre software. Have fun!

2011 May 3 [Tuesday]

Building packages from source tutorial by DebianWomen

DW is conducting a packaging tutorial. Do note, this is not a tutorial for upstream package building from source, but it's still worth listening to. Here is Marga's announcement:

Are you enthusiastic about Debian and thinking about contributing? We want to guide you with the basics.

We are convinced that there are a lot of people out there who want to get involved with Free Software but don't know where to start. For Debian, the most common task you'll do as a contributor is rebuilding a package.

The Debian Women project, in collaboration with the OpenHatch project, will be holding an IRC event to help people who want to compile their first Debian package from source and apply their first patch.

The event

On Saturday May 7th, two tutorial sessions will be held on #debian-women on irc.debian.org to help people rebuild a package for the first time.
The earlier session, suggested for those who live in Oceania, Asia, Africa and Europe, will be held at 11:00 UTC.

The later session, suggested for those who live in the Americas, will be held at 22:00 UTC. You can find out the exact time in your timezone by using the timezone converter: http://www.timezoneconverter.com/cgi-bin/tzc.tzc
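If you'd rather not use a web converter, a few lines of Python do the same job; America/New_York below is just an example timezone, so swap in your own:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# The two sessions on Saturday, 2011-05-07, expressed in UTC:
for hour in (11, 22):
    session_utc = datetime(2011, 5, 7, hour, 0, tzinfo=timezone.utc)
    # Convert to an example local timezone:
    local = session_utc.astimezone(ZoneInfo("America/New_York"))
    print(local.strftime("%H:%M %Z"))  # prints 07:00 EDT, then 18:00 EDT
```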

There will be people available to answer questions and generally help out with any difficulties that might arise throughout the day.

More info at http://wiki.debian.org/DebianWomen/BuildItEvent

Intended audience

This event is aimed at anyone who wants to rebuild a Debian package for the first time. It's a simple task that doesn't require any previous knowledge, only a working installation of Debian (or Ubuntu, or another Debian-derived system). We particularly want to encourage women who want to get involved to take their first steps in contributing to Free Software, but everybody is welcome.

More about IRC

IRC is a real-time chat system you can use to get in touch with members of the Debian community. You can connect to IRC through lots of different clients, including xchat, pidgin and konversation.

About Debian Women

The Debian Women project seeks to balance and diversify the Debian Project by actively engaging with interested women and encouraging them to become more involved with Debian. http://women.debian.org

About OpenHatch

OpenHatch is an open source community aiming to help newcomers find their way into free software projects. It works towards this goal through on-line and outreach events. This event is a reappropriation of the OpenHatch "Build it" events.


2010 September 18 [Saturday]

Gnome outreach program for women - 2010 internships

The application process for the first round of internships sponsored by the GNOME Foundation is now officially open. The internships run from 2010Dec15 to 2011Mar15.

Any woman who has relevant experience and is available for a full-time internship is welcome to apply. The application deadline is 2010Oct25. As part of the application process, they are asking women to take the time to learn about the participating projects and make a contribution to the one they are interested in. These projects include ones in programming, graphic design, documentation, and marketing. For more program details, visit: http://live.gnome.org/GnomeWomen/OutreachProgram2010

Do consider applying for the internship, signing up as a mentor, or helping spread the word by encouraging women to apply - blog, email, dent/tweet, or download this flyer (designed by Máirín Duffy) to send information about the internship program to your local school/college/university or hand it out at conferences. All the materials for spreading the word are here: http://live.gnome.org/GnomeWomen/OutreachProgram2010/SpreadTheWord

2010 September 7 [Tuesday]

KDE-Koffice seeks code contributors

This post makes it after a long latency period that I'm not proud of. Apologies Boudewijn!

Boudewijn Rempt, CTO of KO GmbH, a company that works on open source projects around KOffice, OpenDocument and Qt, is looking for Indian university contacts to establish project channels and increase volunteer contribution to Free/Libre software.

In Europe, he mentored a student who wrote his MA thesis on brush engines and algorithms for Krita, and in May he visited India to give a training to some university students who were going to do an internship with Nokia, working on KOffice. A large part of that training was about getting involved with a free software project. He would like the opportunity to do the same for other groups of students, and would love to work with people to set up similar projects at Indian colleges, universities and companies: when students do their internship or thesis in cooperation with a free software project, they learn a lot that is really valuable (working in teams, producing software that has end users, working with large, real-life code bases) and the projects might gain long-term contributors.

According to Boudewijn, KOffice is C++ based; very few applications can be scripted in Python, and they use some Python scripts for things like quality control, but the core apps and libraries are C++. You can read the detailed build instructions and, if it interests you, directly contact Boudewijn Rempt at boud@valdyas.org (private) OR boud@kogmbh.com (company address).

PS: Feel free to circulate this widely to all Indian universities and colleges as Libre Software is meant for everyone/anyone interested in learning something new.

2010 January 6 [Wednesday]

The Neuroscience of screwing up

I usually avoid reproducing articles from publications, but this was a well-written piece that I wanted to etch in my memory.

From, http://www.wired.com/magazine/2009/12/fail_accept_defeat/all/1

Accept Defeat: The Neuroscience of Screwing Up
By Jonah Lehrer | December 21, 2009  | 10:00 am  | Wired Jan 2010

It all started with the sound of static. In May 1964, two astronomers at Bell Labs, Arno Penzias and Robert Wilson, were using a radio telescope in suburban New Jersey to search the far reaches of space. Their aim was to make a detailed survey of radiation in the Milky Way, which would allow them to map those vast tracts of the universe devoid of bright stars. This meant that Penzias and Wilson needed a receiver that was exquisitely sensitive, able to eavesdrop on all the emptiness. And so they had retrofitted an old radio telescope, installing amplifiers and a calibration system to make the signals coming from space just a little bit louder.

But they made the scope too sensitive. Whenever Penzias and Wilson aimed their dish at the sky, they picked up a persistent background noise, a static that interfered with all of their observations. It was an incredibly annoying technical problem, like listening to a radio station that keeps cutting out.

At first, they assumed the noise was man-made, an emanation from nearby New York City. But when they pointed their telescope straight at Manhattan, the static didn’t increase. Another possibility was that the sound was due to fallout from recent nuclear bomb tests in the upper atmosphere. But that didn’t make sense either, since the level of interference remained constant, even as the fallout dissipated. And then there were the pigeons: A pair of birds were roosting in the narrow part of the receiver, leaving a trail of what they later  described as “white dielectric material.” The scientists evicted the pigeons and scrubbed away their mess, but the static remained, as loud as ever.

For the next year, Penzias and Wilson tried to ignore the noise, concentrating on observations that didn’t require cosmic silence or perfect precision. They put aluminum tape over the metal joints, kept the receiver as clean as possible, and hoped that a shift in the weather might clear up the interference. They waited for the seasons to change, and then change again, but the noise always remained, making it impossible to find the faint radio echoes they were looking for. Their telescope was a failure.

Kevin Dunbar is a researcher who studies how scientists study things — how they fail and succeed. In the early 1990s, he began an unprecedented research project: observing four biochemistry labs at Stanford University. Philosophers have long theorized about how science happens, but Dunbar wanted to get beyond theory. He wasn’t satisfied with abstract models of the scientific method — that seven-step process we teach schoolkids before the science fair — or the dogmatic faith scientists place in logic and objectivity. Dunbar knew that scientists often don’t think the way the textbooks say they are supposed to. He suspected that all those philosophers of science  — from Aristotle to Karl Popper — had missed something important about what goes on in the lab. (As Richard Feynman famously  quipped, “Philosophy of science is about as useful to scientists as ornithology is to birds.”) So Dunbar decided to launch an “in vivo”  investigation, attempting to learn from the messiness of real experiments.

He ended up spending the next year staring at postdocs and test tubes: The researchers were his flock, and he was the ornithologist. Dunbar brought tape recorders into meeting rooms and loitered in the hallway; he read grant proposals and the rough drafts of papers; he peeked at notebooks, attended lab meetings, and videotaped interview after interview. He spent four years analyzing the data. “I’m not sure I appreciated what I was getting myself into,” Dunbar says. “I asked for complete access, and I got it. But there was just so much to keep track of.”

Dunbar came away from his in vivo studies with an unsettling insight: Science is a deeply frustrating pursuit. Although the researchers were mostly using established techniques, more than 50 percent of their data was unexpected. (In some labs, the figure exceeded 75 percent.) “The scientists had these elaborate theories about what was supposed to happen,” Dunbar says. “But the results kept contradicting their theories. It wasn’t uncommon for someone to spend a month on a project and then just discard all their data because the data didn’t make sense.” Perhaps they hoped to see a specific protein but it wasn’t there. Or maybe their DNA sample showed the presence of an aberrant gene. The details always changed, but the story remained the same: The scientists were looking for X, but they found Y.

Dunbar was fascinated by these statistics. The scientific process, after all, is supposed to be an orderly pursuit of the truth, full of elegant hypotheses and control variables. (Twentieth-century science philosopher Thomas Kuhn, for instance, defined normal science as the kind of research in which “everything but the most esoteric detail of the result is known in advance.”) However, when experiments were observed up close — and Dunbar interviewed the scientists about even the most trifling details — this idealized version of the lab fell apart, replaced by an endless supply of disappointing surprises. There were models that didn’t work and data that couldn’t be replicated and simple studies riddled with anomalies. “These weren’t sloppy people,” Dunbar says. “They were working in some of the finest labs in the world. But experiments rarely tell us what we think they’re going to tell us. That’s the dirty secret of science.”

How did the researchers cope with all this unexpected data? How did they deal with so much failure? Dunbar realized that the vast majority of people in the lab followed the same basic strategy. First, they would blame the method. The surprising finding was classified as a mere mistake; perhaps a machine malfunctioned or an enzyme had gone stale. “The scientists were trying to explain away what they didn’t understand,” Dunbar says. “It’s as if they didn’t want to believe it.”

The experiment would then be carefully repeated. Sometimes, the weird blip would disappear, in which case the problem was solved. But the weirdness usually remained, an anomaly that wouldn’t go away. This is when things get interesting. According to Dunbar, even after scientists had generated their “error” multiple times — it was a consistent inconsistency — they might fail to follow it up. “Given the amount of unexpected data in science, it’s just not feasible to pursue everything,” Dunbar says. “People have to pick and choose what’s interesting and what’s not, but they often choose badly.” And so the result was tossed aside, filed in a quickly forgotten notebook. The scientists had discovered a new fact, but they called it a failure.

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we carefully edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered, especially when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.

As he tried to further understand how people deal with dissonant data, Dunbar conducted some experiments of his own. In one 2003 study, he had undergraduates at Dartmouth College watch a couple of short videos of two different-size balls falling. The first clip showed the two balls falling at the same rate. The second clip showed the larger ball falling at a faster rate. The footage was a reconstruction of the famous (and probably apocryphal) experiment performed by Galileo, in which he dropped cannonballs of different sizes from the Tower of Pisa. Galileo’s metal balls all landed at the exact same time — a refutation of Aristotle, who claimed that heavier objects fell faster.

While the students were watching the footage, Dunbar asked them to select the more accurate representation of gravity. Not surprisingly, undergraduates without a physics background disagreed with Galileo. (Intuitively, we’re all Aristotelians.) They found the two balls falling at the same rate to be deeply unrealistic, despite the fact that it’s how objects actually behave. Furthermore, when Dunbar monitored the subjects in an fMRI machine, he found that showing non-physics majors the correct video triggered a particular pattern of brain activation: There was a squirt of blood to the anterior cingulate cortex, a collar of tissue located in the center of the brain. The ACC is typically associated with the perception of errors and contradictions — neuroscientists often refer to it as part of the “Oh shit!” circuit — so it makes sense that it would be turned on when we watch a video of something that seems wrong.

So far, so obvious: Most undergrads are scientifically illiterate. But Dunbar also conducted the experiment with physics majors. As expected, their education enabled them to see the error, and for them it was the inaccurate video that triggered the ACC. But there’s another region of the brain that can be activated as we go about editing reality. It’s called the dorsolateral prefrontal cortex, or DLPFC. It’s located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that don’t square with our preconceptions. For scientists, it’s a problem.

When physics students saw the Aristotelian video with the aberrant balls, their DLPFCs kicked into gear and they quickly deleted the image from their consciousness. In most contexts, this act of editing is an essential cognitive skill. (When the DLPFC is damaged, people often struggle to pay attention, since they can’t filter out irrelevant stimuli.) However, when it comes to noticing anomalies, an efficient prefrontal cortex can actually be a serious liability. The DLPFC is constantly censoring the world, erasing facts from our experience. If the ACC is the “Oh shit!” circuit, the DLPFC is the Delete key. When the ACC and DLPFC “turn on together, people aren’t just noticing that something doesn’t look right,” Dunbar says. “They’re also inhibiting that information.”

The lesson is that not all data is created equal in our mind’s eye: When it comes to interpreting our experiments, we see what we want to
see and disregard the rest. The physics students, for instance, didn’t watch the video and wonder whether Galileo might be wrong. Instead, they put their trust in theory, tuning out whatever it couldn’t explain. Belief, in other words, is a kind of blindness.

How to Learn From Failure

Too often, we assume that a failed experiment is a wasted effort. But not all anomalies are useless. Here’s how to make the most of them.—J.L.

1. Check Your Assumptions : Ask yourself why this result feels like a failure. What theory does it contradict? Maybe the hypothesis failed, not the experiment.

2. Seek Out the Ignorant: Talk to people who are unfamiliar with your experiment. Explaining your work in simple terms may help you see it in a new light.

3. Encourage Diversity: If everyone working on a problem speaks the same language, then everyone has the same set of assumptions.

4. Beware of Failure-Blindness: It’s normal to filter out information that contradicts our preconceptions. The only way to avoid that bias is to be aware of it.

But this research raises an obvious question: If humans — scientists included — are apt to cling to their beliefs, why is science so successful? How do our theories ever change? How do we learn to reinterpret a failure so we can see the answer?

This was the challenge facing Penzias and Wilson as they tinkered with their radio telescope. Their background noise was still inexplicable, but it was getting harder to ignore, if only because it was always there. After a year of trying to erase the static, after assuming it was just a mechanical malfunction, an irrelevant artifact, or pigeon guano, Penzias and Wilson began exploring the possibility that it was real. Perhaps it was everywhere for a reason.

In 1918, sociologist Thorstein Veblen was commissioned by a popular magazine devoted to American Jewry to write an essay on how Jewish “intellectual productivity” would be changed if Jews were given a homeland. At the time, Zionism was becoming a potent political movement, and the magazine editor assumed that Veblen would make the obvious argument: A Jewish state would lead to an intellectual boom, as Jews would no longer be held back by institutional anti-Semitism. But Veblen, always the provocateur, turned the premise on its head. He argued instead that the scientific achievements of Jews — at the time, Albert Einstein was about to win the Nobel Prize and Sigmund Freud was a best-selling author — were due largely to their marginal status. In other words, persecution wasn’t holding the Jewish community back — it was pushing it forward.

The reason, according to Veblen, was that Jews were perpetual outsiders, which filled them with a “skeptical animus.” Because they had no vested interest in “the alien lines of gentile inquiry,” they were able to question everything, even the most cherished of assumptions. Just look at Einstein, who did much of his most radical work as a lowly patent clerk in Bern, Switzerland. According to Veblen’s logic, if Einstein had gotten tenure at an elite German university, he would have become just another physics professor with a vested interest in the space-time status quo. He would never have noticed the anomalies that led him to develop the theory of relativity.

Predictably, Veblen’s essay proved controversial, and not just because he was a Lutheran from Wisconsin. The magazine editor evidently was not pleased; Veblen could be seen as an apologist for anti-Semitism. But his larger point is crucial: There are advantages to thinking on the margin. When we look at a problem from the outside, we’re more likely to notice what doesn’t work. Instead of suppressing the unexpected, shunting it aside with our “Oh shit!” circuit and Delete key, we can take the mistake seriously. A new theory emerges from the ashes of our surprise.

Modern science is populated by expert insiders, schooled in narrow disciplines. Researchers have all studied the same thick textbooks, which make the world of fact seem settled. This led Kuhn, the philosopher of science, to argue that the only scientists capable of acknowledging the anomalies — and thus shifting paradigms and starting revolutions — are “either very young or very new to the field.” In other words, they are classic outsiders, naive and untenured. They aren’t inhibited from noticing the failures that point toward new possibilities.

But Dunbar, who had spent all those years watching Stanford scientists struggle and fail, realized that the romantic narrative of the brilliant and perceptive newcomer left something out. After all, most scientific change isn’t abrupt and dramatic; revolutions are rare. Instead, the epiphanies of modern science tend to be subtle and obscure and often come from researchers safely ensconced on the inside. “These aren’t Einstein figures, working from the outside,” Dunbar says. “These are the guys with big NIH grants.” How do they overcome failure-blindness?

While the scientific process is typically seen as a lonely pursuit — researchers solve problems by themselves — Dunbar found that most new scientific ideas emerged from lab meetings, those weekly sessions in which people publicly present their data. Interestingly, the most important element of the lab meeting wasn’t the presentation — it was the debate that followed. Dunbar observed that the skeptical (and sometimes heated) questions asked during a group session frequently triggered breakthroughs, as the scientists were forced to reconsider data they’d previously ignored. The new theory was a product of spontaneous conversation, not solitude; a single bracing query was enough to turn scientists into temporary outsiders, able to look anew at their own work.

But not every lab meeting was equally effective. Dunbar tells the story of two labs that both ran into the same experimental problem: The proteins they were trying to measure were sticking to a filter, making it impossible to analyze the data. “One of the labs was full of people from different backgrounds,” Dunbar says. “They had biochemists and molecular biologists and geneticists and students in medical school.” The other lab, in contrast, was made up of E. coli experts. “They knew more about E. coli than anyone else, but that was what they knew,” he says. Dunbar watched how each of these labs dealt with their protein problem. The E. coli group took a brute-force approach, spending several weeks methodically testing various fixes. “It was extremely inefficient,” Dunbar says. “They eventually solved it, but they wasted a lot of valuable time.”

The diverse lab, in contrast, mulled the problem at a group meeting. None of the scientists were protein experts, so they began a wide-ranging discussion of possible solutions. At first, the conversation seemed rather useless. But then, as the chemists traded ideas with the biologists and the biologists bounced ideas off the med students, potential answers began to emerge. “After another 10 minutes of talking, the protein problem was solved,” Dunbar says. “They made it look easy.”

When Dunbar reviewed the transcripts of the meeting, he found that the intellectual mix generated a distinct type of interaction in which the scientists were forced to rely on metaphors and analogies to express themselves. (That’s because, unlike the E. coli group, the second lab lacked a specialized language that everyone could understand.) These abstractions proved essential for problem-solving, as they  encouraged the scientists to reconsider their assumptions. Having to explain the problem to someone else forced them to think, if only for a moment, like an intellectual on the margins, filled with self-skepticism.

This is why other people are so helpful: They shock us out of our cognitive box. “I saw this happen all the time,” Dunbar says. “A scientist would be trying to describe their approach, and they’d be getting a little defensive, and then they’d get this quizzical look on their face. It was like they’d finally understood what was important.” What turned out to be so important, of course, was the unexpected result, the experimental error that felt like a failure. The answer had been there all along — it was just obscured by the imperfect theory, rendered invisible by our small-minded brain. It’s not until we talk to a colleague or translate our idea into an analogy that we glimpse the meaning in our mistake. Bob Dylan, in other words, was right: There’s no success quite like failure.

For the radio astronomers, the breakthrough was the result of a casual conversation with an outsider. Penzias had been referred by a colleague to Robert Dicke, a Princeton scientist whose training had been not in astrophysics but nuclear physics. He was best known for his work on radar systems during World War II. Dicke had since become interested in applying his radar technology to astronomy; he was especially drawn to a then-strange theory called the big bang, which postulated that the cosmos had started with a primordial explosion. Such a blast would have been so massive, Dicke argued, that it would have littered the entire universe with cosmic shrapnel, the radioactive residue of genesis. (This proposal was first made in 1948 by physicists George Gamow, Ralph Alpher, and Robert Herman, although it had been largely forgotten by the astronomical community.) The problem for Dicke was that he couldn’t find this residue using standard telescopes, so he was planning to build his own dish less than an hour’s drive south of the Bell Labs one.

Then, in early 1965, Penzias picked up the phone and called Dicke. He wanted to know if the renowned radar and radio telescope expert could help explain the persistent noise bedeviling them. Perhaps he knew where it was coming from? Dicke’s reaction was instantaneous: “Boys, we’ve been scooped!” he said. Someone else had found what he’d been searching for: the radiation left over from the beginning of the universe. It had been an incredibly frustrating process for Penzias and Wilson. They’d been consumed by the technical problem and had spent way too much time cleaning up pigeon shit — but they had finally found an explanation for the static. Their failure was the answer to a different question.

And all that frustration paid off: In 1978, they received the Nobel Prize for physics.

Contributing editor Jonah Lehrer (jonah.lehrer@gmail.com) wrote about how our friends affect our health in issue 17.10

2009 July 3 [Friday]

linuxchix-india moves to india.linuxchix.org

It's been some months since LCIN got a new home on Ketan's server and moved to a wiki at:

LC India Chapter website : http://india.linuxchix.org/
Planet : http://india.linuxchix.org/planet
Mailing list : http://mailman.linuxchix.org/mailman/listinfo/indichix
IRC : #indichix on the server irc.linuxchix.org

Well, Terri updated the official chapter page for India, so it's high time Radha (i need your blog uri please !?) and Kadambari, who completed the move and admin the server with Ram and Ketan helping them all along, got due credit. Thanks a ton folks - y'all r0ck :)

Lazy /me should have blogged about it earlier but i've been preoccupied with stuff. After wrestling with it for ages I finally started putting down my thoughts on all the linguistics stuff on a second blog -- gee, i hardly have the patience to scribble on this blog and here i am creating another maintenance blackhole. *sigh* what was i thinking !!

Atm, in all probability it will remain a private space for me to distill my thoughts from the pen-and-paper khichidi that was turning me scatter-brained each time i asked someone for their opinions and thoughts. Last week, MM and me were out and over-enthusiastic 'me' spent all my cash on some books (not that there are many great publications by Indian authors, but...duh !!), instead of that dress i wanted.

I was planning to attend the fosscomm meet in delhi, mainly as an excuse to meet Hassath, an LC'er with whom i've only corresponded online in the last 5 years. Maybe on Sunday, which gives me a day to extract my money's worth from the dead-tree versions before the trip.

2008 September 15 [Monday]


It's fantastic to see an idea grow and be carried forward because other people believe in it. Thrilled to announce that we have a new addition to the LC family -- Labs in DELHI :-) YAY !!

FAT-net has agreed to host the space and bandwidth; it's open to women learners only, but men are allowed to conduct technical talks. Check the 'fat-net.org' website for their address. Contact Hassath [hassath gmail com] and AjayKumar [ajuonline gmail com] for more details. With Delhi, that's 4 Indian metros covered ...the others being :

2.MUMBAI: At the Bigadda office in Malad. Contact Jayashree Pare [Jayashree DOT Pare # GMAIL DOT] or Warren Noronha - Email: warren.noronha AT gmail.com { Phone: +91-989-280-6204}

3.PUNE :  Contact Swatee Karpe (swateekarpe # GMAIL dot COM) for further details.

4.BANGALORE : DeepRoot Linux Pvt. Ltd., #93/4, First Floor, Nandidurga Road, Bangalore - 560 046 INDIA Phone: +91 (80) 4112 4781 / 82 / 85. Contact Mr. Abhas Abhinav or me.

2008 September 7 [Sunday]

FSF bengaluru meetup

FSF-India (aka gnu.org.in) is the force behind fending off the pro-software-patent lobby in India. Thus far they have done a very good job of resisting the legislation of software patents. This was the first item, among other things, discussed in the planning meeting to celebrate Software Freedom Day/Week from Sep 20-27.

FSUG-Bangalore met at Sunil's office and he pampered us with drinks, pastries and snacks :). If i recall all the names correctly it was Edwin, Jayakumar, Renukaprasad, Sujith, Abhas, Senthil, Anivar, Sunil, Sreedhar, and me attending yesterday..... Did I miss 2 names? sorry :-/

Among other things we had suggestions for :

0] Celebrating GNU's 25th b'day.

1] Statewide contest,

2] Say No to Software Patent Campaign debate

3] Install fests,

4] Celebrating SFD with BMS, RV, Christ, St.Joseph's college students at multiple venues.

5] Banners, posters , etc... Corporates pitching in are always welcome.

AND more.... Over the next few days things will fall into place.

2008 September 1 [Monday]

packaging 101@UDW

Ubuntu Developer Week is going on right now and Daniel Holbach kicked it off with a packaging 101, which i attended at #ubuntu-classroom. I started writing this entry as he conducted the session. Here is the rephrased version of the live session; if any important bits are omitted, please leave a comment to poke me.

Dholbach started off by walking us through the bare-bones structure of a source package (as opposed to the .deb files/binary packages): the stuff that makes a package build and what one will encounter in almost all packages. Some links he gave:



Next, he asked folks to install a small package, devscripts, which contains the tools needed for packaging. Then we used the command "dget http://daniel.holba.ch/motu/hello-debhelper_2.2-2.dsc" to get the source package, and learned a little about how versions differ between Debian and Ubuntu, how Ubuntu names a package it has repackaged from Debian and pushes changes upstream if any have been made, and so on.

The orig.tar.gz is the original, unmodified source code tarball that was released on the homepage of the upstream developers (the software authors), and the .diff.gz is the compressed patch we apply to make it build "our way". Ubuntu needs to add a bunch of instructional files to be able to apply ONE build process to all kinds of source packages (no matter if they are Python, PHP, C, C++, etc.); this works for most packages that use the autotools (automake, autoconf, etc.).
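The three files travel together and their names follow a strict pattern. Here is a minimal shell sketch (using the hello-debhelper example from the session) of how the companion file names relate to the .dsc name; it assumes the usual name_upstreamversion-revision.dsc layout:

```shell
# Derive the companion file names of a source package from its .dsc name.
dsc="hello-debhelper_2.2-2.dsc"
base="${dsc%.dsc}"        # hello-debhelper_2.2-2
upstream="${base%-*}"     # hello-debhelper_2.2 (Debian revision stripped)
echo "${upstream}.orig.tar.gz"   # the pristine upstream tarball
echo "${base}.diff.gz"           # the compressed Debian/Ubuntu patch
```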

Running the command "dpkg-source -x hello-debhelper_2.2-2.dsc" extracts the tarball and applies the patch, while the .dsc file is used to check the md5sums and so on (it contains a bit of meta-data about the source package).
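What the .dsc integrity check amounts to can be mimicked with plain coreutils. A rough sketch (the file name and contents below are stand-ins, not the real tarball):

```shell
# The .dsc records an md5sum for every file; dpkg-source verifies them
# before unpacking. The same idea, with md5sum -c on a stand-in file:
printf 'pretend tarball contents\n' > fake.orig.tar.gz
md5sum fake.orig.tar.gz > checksums.md5
md5sum -c checksums.md5    # reports "fake.orig.tar.gz: OK" on a match
```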


Command : cd hello-debhelper-2.2

debian/changelog has a very strict format to adhere to, but the dch tool in devscripts makes the task easier. Each upload specifies: the name of the source package, the revision, the part of Ubuntu (or Debian) it is uploaded to, the changelog entry text itself, who made the particular upload, a timestamp, etc.

VERSION : Looking at the topmost entry, the upload has the revision number 2.2-2 and was uploaded to "unstable". The 2.2 (the part in front of the dash) means: this is the upstream release that was packaged.
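For reference, a sketch of what one debian/changelog stanza looks like (the package name and version are from the session; the entry text, bug number, maintainer name and date are made up):

```
hello-debhelper (2.2-2) unstable; urgency=low

  * Fix a typo in the manpage. (Closes: #123456)

 -- Jane Packager <jane@example.org>  Mon, 01 Sep 2008 10:00:00 +0200
```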

Example : the hello-debhelper_2.2.orig.tar.gz basically says: these are the unmodified contents that upstream released as 2.2 on their homepage. If you change a tiny bit in the package, you upload 2.2-2ubuntu1, which means :

- 2.2 was released upstream,

- 2 revisions have been made in Debian,

- 1 in Ubuntu,

Then, if the Ubuntu packager forwards the changes to the Debian maintainer, they will be incorporated in 2.2-3 and Ubuntu can "sync" the package from Debian again.
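The pieces of such a version string can be pulled apart mechanically. A small shell sketch of the scheme (it assumes the plain upstream-debianrev"ubuntu"ubunturev pattern with no epoch, so it is an illustration rather than a general parser):

```shell
# Split an Ubuntu package version into its parts.
v="2.2-2ubuntu1"
upstream="${v%%-*}"            # 2.2 - the upstream release
rev="${v#*-}"                  # 2ubuntu1
debian_rev="${rev%%ubuntu*}"   # 2 - revisions made in Debian
ubuntu_rev="${rev##*ubuntu}"   # 1 - revisions made on top in Ubuntu
echo "upstream=$upstream debian=$debian_rev ubuntu=$ubuntu_rev"
```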

Beware: "resetting the counter" in Ubuntu would mean "overwriting all Ubuntu changes with the changes that have happened in Debian", which is risky, as one could drop other small bits that were important to Ubuntu users, and that might be a regression. In some cases where Ubuntu is not able to sync straight away (different opinions of maintainers, upstream, etc.) Ubuntu devels merge the changes.

Some version strings look like "2:1.0~rc2-0ubuntu13", where the "2:" is called "an epoch". It allows you to use a lower version number again, which is the common use-case for reverting to an older version of a package.

Example : you maintain the package frobnicator in Ubuntu and shipped the safe but boring 2.0.0 version in hardy (say 2.0.0-0ubuntu1), but in intrepid you decide to update to 2.1.87 because the set of features sounds cool. This would change to 2.1.87-0ubuntu1 in intrepid, but after getting lots and lots of bug reports from users that your software is broken, you decide to go back to 2.0.0 again. Then you ship 1:2.0.0-0ubuntu1 in the intrepid release and everybody is happy again.
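The ordering trick can be sketched in shell. This only models the epoch rule, not full dpkg version comparison (on a real system, dpkg --compare-versions does the job properly):

```shell
# An epoch is everything before the first ':'; a missing epoch counts as 0.
# A higher epoch always wins, whatever the rest of the version says.
epoch() { case "$1" in *:*) echo "${1%%:*}" ;; *) echo 0 ;; esac; }

old="2.1.87-0ubuntu1"     # the broken intrepid upload (implicit epoch 0)
new="1:2.0.0-0ubuntu1"    # the rollback, carrying an epoch
if [ "$(epoch "$new")" -gt "$(epoch "$old")" ]; then
    echo "rollback sorts newer"
fi
```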

To summarize, the epoch should be used when upstream changes the versioning scheme; it is also another way of making sure the version number is always increasing. Epochs are a nice feature; they just come with the problem that if Ubuntu decides to introduce one and the respective Debian maintainer decides NOT to, new Debian revisions will always compare smaller than Ubuntu's and we cannot "sync" any more.

DistroDiff: if you take a source package from SuSE it will not work for Debian, as the build process is different there.


Looking at the two kinds of stanzas in debian/control: the first one is about the source package, and the following one(s) are about the binary package(s). A source package needs a name, a section and a maintainer, and Standards-Version tells us which version of the Debian Policy (THE document to know about packaging rules) the package complies with. The other interesting bit: Build-Depends.
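As a reference point, a stripped-down sketch of what such a debian/control could look like. The field values below are illustrative, not copied from the real hello-debhelper package:

```
Source: hello-debhelper
Section: devel
Priority: extra
Maintainer: Jane Packager <jane@example.org>
Build-Depends: debhelper (>= 5)
Standards-Version: 3.8.0

Package: hello-debhelper
Architecture: any
Depends: ${shlibs:Depends}
Description: short one-line summary of the package
 Longer description, indented by one space, which is what
 turns up in synaptic, adept and friends.
```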


Build-Depends lists the bare minimum required to build the package. He explained how the package is extracted, copied into a minimal build environment (a chroot containing build-essential, which gives make, etc.), and then the build-depends are installed.

He illustrated it with an example: if dholbach uploads a revision, the build daemons (soyuz) will extract the package (just like we did), copy it into a minimal build environment (a chroot containing build-essential, which gives make, etc.), then install the build-depends. Having described the resulting binary packages (all files that are going to be installed go into one package), he pointed out that each has a package name and description (which turns up in synaptic, adept, etc.), plus Architecture and Depends fields.

Command: apt-cache show hello-debhelper | grep ^Depends

will return something like "Depends: libc6 (>= 2.5-0ubuntu1)", which means the hello-debhelper package that is in the archive needs libc6 to be installed. The build process will figure out which shared libraries the binaries (stuff in /usr/bin/ or /usr/lib/, etc.) in our package are linked against and which package they are in. Dependencies, anyone?
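Since apt-cache needs a configured package archive to query, here is the same grep simulated on a captured record (the record text below is illustrative, not live archive output):

```shell
# What the "apt-cache show ... | grep ^Depends" pipeline does, run against
# a pasted-in package record instead of the real archive.
record="Package: hello-debhelper
Version: 2.2-2
Depends: libc6 (>= 2.5-0ubuntu1)"
printf '%s\n' "$record" | grep '^Depends'
```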


Running out of time, he moved on to copyright and mentioned that debian/copyright is another critical part of the packaging process. It is critical for a different reason: it has little to do with the actual build process, but it makes sure each packager reflects all the copyright holders, copyrights and upstream authors in the package.

At times there is proprietary content that can't be shipped because of licenses that forbid any changes. One thing that must be paid close attention to (when you create a package from scratch) is that the source tarball should ship the verbatim license texts itself, and you need to double-check this.


debian/rules starts with : #!/usr/bin/make -f

This was the last part of the puzzle: it's a Makefile. If one has worked with makefiles, it's easy to spot the targets called clean, install, build, binary-indep, binary-arch and so on. In those targets the upstream build system is "wrapped": ./configure is called, make is called, etc., with different prefixes and in 'special' places. The dh_* scripts are all part of debhelper (remember, it's the package that is build-depended on), which contains a huge set of very handy helpers to make common tasks like "install this .desktop file and register it in the right place" or "put this changelog in the right place and pretty please compress it" very, very easy.
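A heavily abbreviated sketch of a classic debhelper-style debian/rules, just to show the shape (this is not a complete, buildable rules file; a real one needs more targets and more dh_* calls, and recipe lines must be indented with tabs):

```
#!/usr/bin/make -f

build:
	./configure --prefix=/usr
	$(MAKE)

clean:
	[ ! -f Makefile ] || $(MAKE) distclean
	dh_clean

binary-arch: build
	dh_testdir
	dh_installchangelogs
	dh_installdocs
	dh_gencontrol
	dh_md5sums
	dh_builddeb
```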

To avoid getting a piece of the source package all wrong, he recommended starting work on existing packages and fixing small bugs first, before moving on to other things.

Links : https://wiki.ubuntu.com/MOTU/GettingStarted has all the documentation, and you can join #ubuntu-motu if you ever have any questions about packaging. After a lot of hugging, JorgeCastro started his talk on "Upstream Bug Linkages", but that interesting topic belongs to another post.
