...replied to an important email thread, which caused me to discover a bug, which I reported, to then catch wind of a possible solution, which to attempt I had to hack my .ssh/config, to add a ProxyCommand, to go through a bastion host (which is more than just a hostname I found out today), to clone an ansible repo, hack on a python config, and wield git format-patch and interactive rebasing to squash commits, to attach a clean patch to a ticket, that got shouted out on the mailing list, and accepted during an infra freeze.

If that garbled mess of jargon didn't make any sense to you, fret not. You should find comfort in the fact that at one point in my life--not even that long ago--it would not have made any sense to me either...
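For the curious, the .ssh/config hack from that chain looked roughly like this--a minimal sketch, with placeholder hostnames rather than the real machines:

```
# ~/.ssh/config -- a minimal sketch; "bastion.example.com" and
# "internal.example.com" are placeholder hostnames
Host internal
    HostName internal.example.com
    User me
    # Tunnel through the bastion host instead of connecting directly
    ProxyCommand ssh -W %h:%p bastion.example.com
```

With that in place, `ssh internal` (and a git clone over ssh) transparently hops through the bastion, and `git rebase -i` to squash the work-in-progress commits followed by `git format-patch` produces the single clean patch to attach to the ticket.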

Don't be afraid to ask (lots of) questions.
Don't be afraid to get (very) productively lost.
Don't be afraid to be a (complete) beginner.
Don't be afraid.



Being born when I was was a privilege.

I feel connected to everyone. I don't feel disconnected from the Greatest Generation, and certainly not from the boomers (my parents), Generation X (my aunts/uncles/peers/mentors), or Millennials (the generation I am on the cusp of).

The way I feel connected to these generations of folks is through common experience. That is why I love the internet so much: it is one of the great bridges between the minds of all.

Another is Music.

And particularly, Funk.

Funk, before hiphop and after jazz, was where remix really came into its own. Bridging generations, smashing genre boundaries, and overall just being the kind of music that no one can deny--if you've got any funk in you, it makes you move when you hear it.

And whenever I think of the Funk I was raised on, there was no other artist that got more play during my childhood than Prince.

I can close my eyes and still see scenes of my father blaring The Hits long into the Summer Evenings, with our entire back patio full of some of the happiest adult faces I saw as a child. This was easily the happiest I can ever remember seeing my parents when they were together.

I will be *forever* thankful for being exposed to all the music and art that my family listened to together, especially the free speech and sex positive messaging of Prince, even as a youngling.

I didn't always agree with Prince's Iron Fisted attitudes on DRM and "piracy," but his music has defined an era of my life that made me who I am today.

Thank you Prince for manifesting common ground between folks of my generation, and the folks who came before us. Thank you for blurring lines and smashing boundaries--musical and otherwise.

Thank you for all the happiness you brought into my life, and my family's life, through your music.

Fedora 24 Alpha released!

The Fedora 24 Alpha is here, right on schedule for our planned June final release. Download the prerelease from our Get Fedora site:

What is the Alpha release?

The Alpha release contains all the features of Fedora 24’s editions in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is code-complete and bears a very strong resemblance to the third and final release. The final release of Fedora 24 is expected in June.

If you take the time to download and try out the Alpha, you can check and make sure the things that are important to YOU are working. Every bug you find and report doesn’t just help you; it improves the experience of millions of Fedora users worldwide!

Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can, and your feedback improves not only Fedora, but Linux and Free software as a whole.

Fedora-Wide Changes

Under the hood, glibc has moved to 2.23. The update includes better performance, many bugfixes and improvements to POSIX compliance, and additional locales. The new library is backwards compatible with the version of glibc that was shipped in Fedora 23, and includes a number of security and bug fixes.

We’ve also updated the system compiler to GCC 6 and rebuilt all packages with that, providing greater code optimization and catching programming errors which had slipped past previous compilers.


  • Workstation features a preview of GNOME 3.20, which was released just after the Alpha was cut. The GNOME 3.20 release is already available in the Fedora 24 update stream. Once you install Fedora 24 Alpha, you can use Software or dnf to update. GNOME 3.20 will of course be part of Fedora 24 Beta and the Final release.
  • We have decided not to make Wayland, the next generation graphic stack, the default in Fedora 24 Workstation. However, Wayland remains available as an option, and the Workstation team would greatly appreciate your help in testing. Our goal is one full release where the non-default Wayland option works seamlessly, or reasonably close thereto. At that point we will make Wayland the default with X11 as the fallback option.
  • There have been many changes to theming in GTK+ 3, where a stable API has not been declared. As a result, applications that use custom CSS theming, for example, may show issues with their appearance. This may include default applications that come with Fedora 24 Alpha Workstation. Users are asked to try out their favorite GTK+ 3 based applications and report bugs upstream so they might be addressed in time for the final release.


  • FreeIPA 4.3 (Domain Controller role) is included in Fedora 24. This version helps streamline installation of replicas by adding a replica promotion method for new installs. A new topology plugin has also been added that automatically manages new replication segment creation. An effective replica topology visualization tool is also available in the webUI.
  • More packages have been removed from the default Server edition to make the footprint of the default installation smaller.


  • For Fedora 24, we’re working hard to make Fedora the best platform for developing containers, from the base Fedora container images to a full-featured PaaS to run and manage them.
  • We’re packaging OpenShift Origin to make it easy to run on Fedora. OpenShift Origin is a distribution of Kubernetes optimized for enterprise application development and deployment. Origin embeds Kubernetes and adds powerful additional functionality to deliver an easy-to-approach developer and operator experience for building applications in containers.

Spins and Labs

Fedora Spins are alternative desktops for Fedora that provide a different desktop experience than the standard Fedora Workstation edition. Fedora Workstation is built on the GNOME desktop environment and aims to provide a compelling, easy-to-use operating system for software developers, while also being well-suited to other users. Our spins showcase KDE Plasma, Xfce, LXDE, MATE-Compiz, Cinnamon, and Sugar on a Stick (SoaS) on the same Fedora base.*

Fedora Labs are collections of software for specific purposes — Games, Design, Robotics, and so on. They are pre-selected sets of Fedora software and are ideal for events or audiences with the corresponding specific interest. Fedora 24 comes with a new lab, the Astronomy Spin, a set of tools for astronomers and astrophysicists.

*: Note that the SoaS spin and Security, Games, and Design Suite labs are missing from the Fedora 24 Alpha release. We plan to fix this for the Beta release.


ARM images are available as usual for several use cases. Fedora 24 ships desktop images, such as Spins and Workstation, but also provides a Server image. A minimal Fedora image completes the wide set of install options for your ARM board.

Atomic Host

Fedora Atomic Host releases on a two-week schedule, and each release is built on the latest overall Fedora OS. This schedule means the Atomic Host is currently built on Fedora 23, but will switch to Fedora 24 when we’re out of Beta. There currently is no Fedora Atomic Host built on Fedora 24 Alpha, but we plan to have that for the Beta.

However, you can try one of the newer features with recent Fedora Atomic Host builds today. Since Fedora 23 was released, Atomic Host has added a “developer mode” that gives a better developer experience overall. When running in developer mode, the host will download and start Cockpit and fire up a tmux session to make it easier to work at the console and obtain necessary information (like the root password, IP address, etc.).

Issues and Details

This is an Alpha release. As such, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode.

As testing progresses, common issues are tracked on the Common F24 Bugs page.

For tips on reporting a bug effectively, read “how to file a bug report.”

Release Schedule

The full release schedule is available on the Fedora wiki:

The current schedule calls for a beta release towards the beginning of May, and the final release in early June.

Be aware that these dates are development targets. Some projects release on a set date regardless of feature completeness or bugs; others wait until certain thresholds for functionality or testing are met. Fedora uses a hybrid model, with milestones subject to adjustment. This allows us to make releases with new features and newly-integrated and updated upstream software while also retaining high quality.

Flock 2016: Krakow, Poland

If you’re a contributor to Fedora, or interested in getting more involved, one way to engage with our community is through Fedora premier events.

The annual North American/European conference for Fedora contributors is Flock, which takes place August 2-5, 2016 in Krakow, Poland. Registration is now open at

For more information about our Latin American and Asia-Pacific Conferences, stay tuned for announcements on the Fedora Community Blog:

Announcing Fedora’s Diversity Adviser

This post was originally shared on the Announce mailing list.


As some of you may recall, Fedora added a new seat to the Fedora Council for a Diversity Adviser.

It is with great pleasure that we do hereby announce that this seat has been filled by long-time Fedora contributor María “tatica” Leandro!

What is the Diversity Adviser?

The Fedora Diversity Adviser acts as a source of support and information for all contributors and users, especially those from underrepresented populations, so that issues of inclusion and equity can be discussed and addressed with planning and strategy.

The Fedora Diversity Adviser will lead initiatives to assess and promote equality and inclusion within the Fedora contributor and user communities, and will develop project strategy on diversity issues. The Diversity Adviser will also be the point of contact for Fedora’s participation in third-party outreach programs and events.

Interview with María, Fedora’s Diversity Adviser

To help communicate the responsibilities of the position, we asked María a few questions on the Fedora Community Blog about being the Diversity Adviser and her goals as she begins her new position. Here is an excerpt.

Q: How would you describe your position as Diversity Adviser in relation to the current situation in Fedora?

A: “Since this is the first time Fedora set a position like this, I see my role more as an informative one. Fedora is a quite diverse community despite what most people think. We have contributors all around the world who gather every day to create fantastic software and spread knowledge; breaking gender, language and distance barriers on a daily basis.

I also want to serve as a mediator, and let our contributors know that Fedora has ears for everyone. It’s no secret that being different is great, but sometimes that puts you on a vulnerable position (as a Latin American, Spanish Speaker and Female contributor, I can relate to some), and we want to make sure everyone feels comfortable with the Fedora family.”

Q: What are some of your goals or vision as Fedora’s Diversity Adviser?

A: “I will start kicking a small survey to know how diverse our community is, as it’s important to me to understand the reach of our contributors, their experiences, needs and culture prior to start any project. It is no secret that to know our future actions we need to know our numbers, and because we are such a worldwide community, sometimes we have a huge lack of information about those who constantly help us be what we are. These actions will allow us to have a yearly report that will show us more in detail how our progress on diversity have worked out. This is not a life-time position, so my main goal is to leave all the needed information ready and available for those who will follow. Everything in Fedora is a team work, and the Diversity Adviser position is no different from others.

As we start to learn more from our contributors we will also be able to create programs to help each minority group. I would like this to be the second stage of the diversity action plan for 2016. Either gather once a week to practice English for about half an hour with some volunteers, to make monthly meetings where one of our contributors enlighten us with something about their culture; the idea is to spread knowledge beyond just technology.

Also having a monthly short meeting to discuss those topics that might need help (revisions on our politics, codes of conduct, an anti-harassment paper, etc.) or just someone that wants to tell their experiences. I’m interested into people knowing that Fedora has an insane cultural background and maybe in a near future, this will open the eyes of those who think that everything in Fedora is plain blue.”

Read more about Diversity Adviser

This interview originally appeared as part of a larger article on the Fedora Community Blog, titled “Women in Computing and Fedora”. You’re encouraged to give it a read and share it with others in and out of the community!

Women in Computing and Fedora

María is available weekly on Tuesdays at 12:30pm UTC in #fedora-diversity on Freenode, where you are invited to stop by and join the conversation.

Congratulations, Tatica, and please join me in giving her a warm welcome to the Council.

The post Announcing Fedora’s Diversity Adviser appeared first on Fedora Community Blog.

FOSDEM 2016: Event Report

Organizing the #DistroDevRoom


As a longtime FOSS advocate and conference-goer, I have for many years woefully followed the press and event coverage after FOSDEM from afar, wishing on my lucky stars that someday I too might be able to attend this premier FOSS event in Europe. This year, finally, I got the opportunity not only to attend, but to help organize the Distributions DevRoom. Devrooms are a sort of mini-track within the larger conference, and ours focused on the common problems faced by Linux distributions, packagers, and other developers working on grand-scale community collaboration.

I have participated in talk selection for conferences before, but this was by far the deepest I had delved into a track so close to what I see in my role as Fedora Community Lead. Thankfully, much of the day-of technical logistics was handled by the extremely capable and helpful FOSDEM volunteers and staff. As for the pre-conference organizing, I was not in it alone. Brian Stinson and Karanbir Singh were there through the whole process of sending the CFP, vetting proposed talks, playing scheduling bingo, and making the devroom itself run smoothly.

One of the priorities we had when accepting proposals was a plurality of topics and speakers. To make this happen we did two things: 1) schedule mostly 20-minute blocks for talks (with very few 60+ minute blocks reserved for particularly meaty topics), and 2) offer up our own local lightning talks in the DevRoom. For the lightning talks, we started with some pre-accepted talks, but kept an opportunity for folks to sign up ‘day-of’ at the conference itself (more on why that was a good idea later). We think this was a winning combination, and it allowed many people to participate in our track, and many projects to be represented.

Fedora by the Numbers

I wasn’t just the room’s moderator; I was also a speaker at the event! I gave a talk based on metrics gathered by members of the Fedora CommOps and Infra teams, to help describe where the project was at and tell stories with data. Special shout-out to smooge, mattdm, threebean, and bee2502 for their amazing data gathering and visualization work. You can find many of the tools and scripts used to gather this data in the fedora-stats-tools repo on GitHub.

Fedora Decks and Presos

FOSDEM by the Numbers

This was an interesting bit of data generated during the conference that showed the distribution of operating systems accessing the FOSDEM network. All told, rumor has it there were over 8000 attendees at FOSDEM this year [Citation Needed].

Électricité De France

This was an absolutely mind-boggling presentation to watch. Here are some facts, listed in the bullet points below, about this Debian-based distro, built by the largest nuclear power provider in France, and in the world.

  • 73 billion Euros in revenue
  • 38.5 million customers
  • 623 TWh of energy produced annually
  • 136 GW of production capacity
  • 73 nuclear reactors, providing 77% of national production in France
  • 158,000 employees

Surprise Lightning Talk: Mark Shuttleworth

I was heads-down all day, and it turned out that the “10 minute passing periods” we had kept during the first day of the devroom had somehow been removed from our schedule leading up to the lightning talks. While I was prepping speaker materials and moderating the room, a fellow came up to my table and asked me if there were any lightning talk slots left. Glancing quickly at the board, I saw they were full, but, in the spirit of plurality, I knew that we had kept the extra 10 minutes in our lightning talk block, and I offered to give half of it to our prospective speaker. He accepted, and I forgot to ask him for his contact info.

A few hours later, that same fellow came up at the end of our talk and asked if he could still present. I said sure, and asked him what his name was so that I could introduce him properly. There was a pause, and I looked up from my keyboard to catch him smiling as he walked up to the front of the room and told me his name was Mark Shuttleworth.

Our next speaker, Wookey, is a long-time Debian core developer of many, many years, and he was having a bit of trouble getting his laptop to connect with the projector. Mark, like a pro, kept up his low-tech chalkboard explanation for an extra 10 minutes until the technical issues with Wookey’s laptop were resolved. As soon as the projector stopped glowing blue, Mark thanked the audience and went back to the hallway track for questions.

State of ARM

This may have been one of my favorite talks for truly appreciating how much upstream firepower had gathered in one place. My tweet below is a 140-character attempt to capture that spirit. In a nutshell, Wookey would bring up a slide with a library, tell the audience he wasn’t sure about its status, and within seconds, the maintainer of that upstream library would raise their hand and say things to the room like “I sent that patch to the list yesterday after talking with so-and-so here at FOSDEM.” It happened at least three times, and each time it gave me even more warm-and-fuzzies to know we were bringing together so many people core to ARM development in one room at FOSDEM.

From the Twitter Stream

Measuring Action and Impact: v1.0

One of the charges of the CommOps team is to help measure the action and impact of investments of time and resources that are made in the FOSS community. This, to a large extent, includes events like FOSDEM. Because events themselves happen ‘IRL’, and not mostly in revision-controlled code repositories, tracking impact can be difficult.

But CommOps Metrics Lead Bee Padalkar is a Python and data wiz, who was glad to take up the challenge. Bee started by looking at attendees who got the FOSDEM 2016 Badge at the Fedora booth at FOSDEM, and then observed their activity on the Fedmsg Bus before and after the conference. Accounts which start at the conference and then become ongoing, active contributors can be counted as measurable conference success! This script is being made generalizable by Bee and the folks in CommOps, so that we can gather pre/post Fedmsg event activity like this in the future! Very exciting and groundbreaking work for our team.
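While the real script lives in the fedora-stats-tools repo, the core idea can be sketched in a few lines of Python. Note this is a hypothetical sketch, not the actual fedmsg schema or thresholds: count each account’s activity before and after the conference date, and flag accounts that first appear at the event and stay active afterwards.

```python
from datetime import datetime

# Hypothetical sketch of the pre/post analysis -- the real implementation
# lives in the fedora-stats-tools repo and reads from the fedmsg bus.
# Each event here is a (username, timestamp) pair.
FOSDEM_START = datetime(2016, 1, 30)

def retained_newcomers(events, cutoff=FOSDEM_START, min_after=5):
    """Accounts with no activity before the conference but sustained
    activity afterwards count toward measurable conference success."""
    before, after = {}, {}
    for user, ts in events:
        bucket = before if ts < cutoff else after
        bucket[user] = bucket.get(user, 0) + 1
    return sorted(user for user, count in after.items()
                  if count >= min_after and user not in before)

# A newcomer who stays active after the badge, and a long-time contributor
events = ([("newbie", datetime(2016, 2, day)) for day in range(1, 8)]
          + [("oldtimer", datetime(2015, 12, 1)),
             ("oldtimer", datetime(2016, 2, 2))])
print(retained_newcomers(events))  # → ['newbie']
```

The cutoff date and activity threshold are knobs for the analyst; the same pre/post framing generalizes to any conference with a known badge-award window.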


You can read about this on the CommOps mailing list, and stay tuned for more analysis of other conferences, and of course more pretty graphs.


This may have been my first FOSDEM, but I certainly hope it will not be my last. Brussels is one of the most beautiful cities I have ever seen, and I loved being able to use my French (I did grow up close to Canada after all). FOSDEM was amazing, and I hope to see much more of it next year. I’ll leave you all with my parting tweet.

The post FOSDEM 2016: Event Report appeared first on Fedora Community Blog.


Dear Stefanie,

Your response made me want to cheer!


I totally worked my first unpaid internships in a large expensive east coast city post-undergrad, and failed at a few startups before finding my way back to do the tail-between-legs return to the parents’. I drove a tow-motor with three degrees, until I could afford a really shitty car that could (barely) drive me to the “big” city nearby, and found my way back into the grind of academia at my alma mater. I started as a volunteer (for *another* unpaid year), then part-time, then full-time, then double-time (grad skool + full-time), then double-plus-time (adjunct + research/organizing + full-time), and then FINALLY made it into the ideal job with the company I had always dreamed about, after six long years.

It felt great. It still feels good, everyday. I’m forever in debt to the people who helped me get here; my partner, parents, mentors, professors, and all the Free/Open Source developers who were willing to help me, as long as I kept helping everyone else.

Your story is inspiring, and one that everyone — “millennials” to “boomers” — could stand to hear.


Your response also brought back a rush of regret and anxiety…

This ‘right of passage’ 

This ‘mythos of rugged individualism,’ 

This ‘academic and professional hazing’ 

That everyone is ‘supposed to go through,’ 

These “comeuppances”

They are still supremely destructive to ourselves and others, and we shouldn't keep wearing them like a badge of honor.

It is not OK to have to work two or three jobs, or give up nearly all of your weekends (not just one day of them occasionally) and many holidays to maybe get a shot at an opportunity to pull-yourself-up-by-your-bootstraps because “hey, that’s life, and that is how everyone else did it, so you gotta do it that way too.”

I did my best to build a talent pipeline for hackers like me, from where I was from, so that they wouldn’t have to work unpaid jobs like I did before they got started. I spend time as a civic hacker — identifying good work for good causes for good money for good people — so that others don’t have to deal with the same kinds of crippling debt and shame.

I don’t have all the answers, but we should do our best to help those who want to help others. There is something to be said for “learning the hard way;” not every student I’ve ever mentored has been able to grow as quickly or effectively as they would when they “hit bottom.”


Some people have had it *rilly* hard already, maybe even seeing what the bottom looks like already, and could use every bit of compassion and help they can get. I know I needed it — and I was more privileged than some of my students.

I admire your gumption, and your willingness to call out entitlement when you see it, but I still don’t wish those six years on anyone else, just because I had to go through it. 

This kind of ‘work ethic’ eventually capsized nearly all of my personal relationships with people outside of that break-neck cycle — including, in the end, my core partnership at home.

It was hard, and we did it, but it still isn’t OK. 

Hopefully when folks like us make it to the next level, we can follow the “campsite” rule, that is; we can leave the onboarding path to fruitful careers a little bit better than when we walked it — uphill, both ways, in three feet of snow, with no shoes on ;)

Thank you for sharing your story Stefanie.

WiTNY 2016: Event Report

January has been a rather busy month. The Women in Technology New York (WiTNY) conference was my first big conference of the new year. It is also one that falls squarely in line with one of CommOps’ big priorities for this year: increasing the involvement of women and underrepresented groups within our project.

Using regular posts to social media, I was able to keep a sort of log of segments of the WiTNY journey.

Pre-WiTNY Arrival

Flying into EWR from RDU in Raleigh, NC was a relatively short one-and-a-half-hour trip.

The pre-conference prep meeting was held at the new Capital One Labs facility in Manhattan, and was the first time I had seen all of the stakeholders in the same place. During the meeting, we got to hear from the facilitators about how to help guide the discussions and workshops, and enjoy a meal with all the other volunteers and speakers.

We also got our first look at the WiTNY Program booklets, which really came out splendidly.

WiTNY: Program Book Layout

WiTNY: The Big Day

Opening Keynote


Branding Yourself with Open Source Panel

WiTNY: Ask Me Anything

Breakout Sessions

Closing Keynote

Snowzilla: The Long Journey Home

Thank You

Thank you to Capital One Labs and the organizers, speakers, sponsors, and volunteers of the WiTNY conference. I got to meet so many inspiring and aspiring women and advocates, and I hope that open source can provide you with both the tools and a direction to empower you. If ever I can be a resource, my inbox is always open.

The post WiTNY 2016: Event Report appeared first on Fedora Community Blog.

Flock 2016: Save the Date

Flock 2016 will be held in…

After four bids (again this year!) and months of comparisons and research, the Flock Planning Committee is pleased to announce that Flock 2016 will be held in Krakow, Poland from Tuesday August 2nd, through Friday August 5th.

The City of Krakow

The city of Krakow is the second largest city of Poland and is the country’s former capital. Krakow is the top tourist destination in Poland. The city basks in the glory of its long history and it greatly treasures its reputation as the culture capital of Poland.

Krakow’s seven universities plus almost twenty other institutions of higher education make it the country’s leading center of science and education. Krakow is the metropolis of southern Poland and the capital city of the Malopolska Province. The city has about 755,000 permanent residents and the Krakow conurbation totals some 1.5 million people.

Thank you Bidders!

Flock wouldn’t be possible without the participation of community members carefully researching and submitting bids. Even when they are not selected, bids help to shape the conversation around the conference and set the bar for future bids, this year and beyond.

Thank you to Pingou, Eseyman, Daja, and Chris for contributing your detailed and competitive bids for Paris, Toulouse, and Vienna.

The Krakow bid was submitted by Brian Exelbierd and Dominika Bula, and will continue to be updated as the Planning Committee confirms more details about the conference and its associated social events.

Register and submit talks now!

Information is coming soon about CFP deadlines, hotel lodging, and more. But you don’t have to wait! You can submit your talks along with your registration right away. Visit the official website for Flock 2016 now, where you can register and submit a talk.

Send any questions to (and join that list if you’d like to help plan future Flock events.)

The post Flock 2016: Save the Date appeared first on Fedora Community Blog.

Mailing List Migrations: Hyperkitty, Mailman3

Fedora <3 Hyperkitty

Hyperkitty is here

The Fedora Engineering team has been working on a new system for our mailing lists. Mailman 3 came out earlier this year, and it has a shiny new web UI: Hyperkitty.

The Fedora Hosted lists will be migrated on November 16th, and the Fedora Project lists later in the week. After migration, you can use the new Hyperkitty UI to post to and read the lists if you choose, or continue to get emails in the traditional way.

Changes in headers and other features

There may be some changes to some headers, so if you filter your list emails, be ready to adjust your filters. See the wiki page below for details:
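As one hedged example: if you sort list mail with procmail, a recipe keyed on the List-Id header (a standard header that mailing-list software sets; the list address below is a placeholder, not a confirmed post-migration value) should be more robust than matching on Subject tags or To/Cc addresses:

```
# ~/.procmailrc -- sketch only; substitute your list's actual List-Id value,
# which you can read from the headers of any message after the migration
:0:
* ^List-Id:.*devel\.lists\.example\.org
fedora-devel/
```

Checking the raw headers of one post-migration message per list is the quickest way to confirm what your filters should match.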

Some lists using Mailman 2 features not yet available in Mailman 3 will be migrated later. More information, as well as the current status of lists migrated, being migrated, and deferred for migration, can be found at:

Hyperkitty migration help

If you have any questions, feel free to ask on the Infrastructure list.

If you find a problem or issue, please file a Fedora Infrastructure ticket and we will work to fix things for your case or bug.

— The Fedora Infrastructure team

The post Mailing List Migrations: Hyperkitty, Mailman3 appeared first on Fedora Community Blog.

FOSDEM 2016 Distro Devroom: Call for Participation

FOSDEM 2016 – Distributions Devroom Call for Participation Logo

The FOSDEM Distro Devroom will take place 30 & 31 January 2016, in room K.4.201 at the Université Libre de Bruxelles in Brussels, Belgium.

As Linux distributions converge on similar tools, the problem space overlapping different distributions is growing. This standardization across the distributions presents an opportunity to develop generic solutions to the problems of aggregating, building, and maintaining the pieces that go into a distribution.

We welcome submissions targeted at developers interested in issues unique to distributions, especially in the following topics:

  • Cross-distribution collaboration issues, e.g. content distribution and documentation
  • Vendor relationships (e.g. cloud providers, non-commodity hardware vendors, etc.)
  • Future of distributions, emerging trends, evolving user demands of a platform
  • User experience management (onboarding new users, facilitating technical growth, user-to-contributor transitions, etc.)
  • Building trust and code relationships with the upstream components of a distribution
  • Solving problems like package and content management (rpm/dpkg/ostree/coreos)
  • Contributor resource management, centralised trust management, key trust, etc.
  • Integration technologies like installers and deployment facilitation (e.g. cloud contextualisation)

Submissions may be in the form of 30-55 minute talks, panel sessions, round-table discussions, Birds of a Feather (BoF) sessions or lightning talks.


  • Submission Deadline: 10th Dec 2015
  • Acceptance Notification: 15th Dec 2015
  • Final Schedule Posted: 17th Dec 2015

How to submit


  1. If you do not have an account, create one here
  2. Click ‘Create Event’
  3. Enter your presentation details
  4. Be sure to select the Distributions Devroom track!
  5. Submit

What to include

  • The title of your submission
  • 1-paragraph Abstract
  • Longer description, including the benefit of your talk to your target audience
  • Approximate length / type of submission (talk, BoF, …)
  • Links to related websites/blogs/talk material (if any)

If you have any questions, feel free to contact the devroom organizers:
distributions-devroom at

This message brought to you by

Karanbir Singh (twitter: @kbsingh) and Brian Stinson (twitter: @bstinsonmhk) for and on behalf of The Distributions Devroom Program Committee

The post FOSDEM 2016 Distro Devroom: Call for Participation appeared first on Fedora Community Blog.

Last call for Flock 2016 bids

Flock 2016 bids needed – submit now!

Flock 2016 planning is in progress! Flock is the annual conference for Fedora contributors to come together, discuss new ideas, work to make those ideas a reality, and continue to promote the foundations of the Fedora Community: Freedom, Friends, Features, and First.

Each year, project leadership works with community members who submit bids to bring Flock to their city. Flock alternates between a North American venue and a European venue each year. More European proposals are needed soon for next year’s event. So far, there are two bids in France, but for budget reasons, it does not seem like they will be possible. There is another bid in progress for Vienna, Austria.

It’s not too late to get a proposal in for another city! You can also look at the winning proposal for the city that hosted Flock 2015. If you want to send in a bid, two important things to focus on are cost and convenience. One popular feature of the 2015 site was that the hotel and convention center were in the same building. This would be a major boost to any 2016 bid.

If you are interested in bringing Flock 2016 to your city, you can find information on that process on the Fedora Wiki.

Send any questions to, the official mailing list for all planning and coordination for organizing Flock every year.

The post Last call for Flock 2016 bids appeared first on Fedora Community Blog.


Fare Thee Well ROC City

It is with both a heavy and hopeful heart that I am here to tell you the news, Rochester. I am proud of what has been accomplished here, what has been built here--personally and professionally--and I am going to miss you very much when I move to Raleigh in early 2016.

This will come as a shock to some of you who have seen me as more-or-less a fixture in the Upstate NY Hacker community for well over a decade now (Sans a few campaign seasons in the Capitol and NYC.) For others, it will be no surprise at all.

I remember the moment it changed for me. It was really nothing all that monumental: all I did was take my employee badge and swipe through a turnstile, but I might as well have walked through the door in the back of the wardrobe. It was the first time that I had felt like a Red Hatter, walking into Red Hat. I felt like I belonged. It felt like home.

I cannot wait to open the book on this new chapter. Thank you ROC City, you will always have my heart.

Fedora 21 End Of Life on December 1st

With the recent release of Fedora 23, Fedora 21 will officially enter End Of Life (EOL) status on December 1st, 2015. After December 1st, all packages in the Fedora 21 repositories will no longer receive security, bugfix, or enhancement updates, and no new packages will be added to the Fedora 21 collection.

Upgrading to Fedora 22 or Fedora 23 is highly recommended for all users still running Fedora 21.

Looking back at Fedora 21

Fedora 21, released in December 2014, was the first release to ship separate Workstation, Server, and Cloud editions. Fedora 21 Workstation was where users could first test GNOME on Wayland, which is slated to become the default display server come Fedora 24. On the Server side, Cockpit and Rolekit both made their debut, and Fedora Cloud introduced the Fedora Atomic Host for the first time.

Screenshot of Fedora 21

Fedora 21 Workstation

About the Fedora Release Cycle

The Fedora Project provides updates for a particular release up until a month after the second subsequent version of Fedora is released. For example, updates for Fedora 22 will continue until one month after the release of Fedora 24, and Fedora 23 will continue to be supported up until one month after the release of Fedora 25.
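The support window described above can be sketched as a tiny helper. The ship dates below are placeholders purely for illustration, not real release dates:

```python
from datetime import date, timedelta

def eol_date(release, ship_dates):
    """A Fedora release goes EOL roughly one month (30 days here)
    after the second subsequent release (N+2) ships."""
    return ship_dates[release + 2] + timedelta(days=30)

# Placeholder ship dates, purely for illustration
ship_dates = {24: date(2016, 6, 1)}
fedora22_eol = eol_date(22, ship_dates)  # 30 days after Fedora 24 ships
```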

The Fedora Project Wiki contains more detailed information about the entire Fedora Release Life Cycle, from development to release, and the post-release support period.

Celebrating Software Freedom Day 2015

“Software Freedom Day (SFD) is a worldwide celebration of Free and Open Source Software (FOSS). Our goal in this celebration is to educate the worldwide public about the benefits of using high quality FOSS in education, in government, at home, and in business — in short, everywhere!”

Each year, Software Freedom Day (SFD) events are organized by volunteers in dozens of cities worldwide. These events each take on their own character, but typically involve some sort of address from the organizers, demos and presentations from community members, and installfests and hackathons. Each venue and organizer will have their own agenda and timeline, and there were specific things that Fedora pitched to local attendees at a number of events this year.

You can read more about the competitions and activities that attendees participated in here on the SoftwareFreedomDay page on the Fedora Wiki.

Fedora Badge-a-thon

For 24 hours over the weekend, participants had a chance to compete for cred, glory, and swag, to see who could accumulate the most Fedora Badges.

“Fedora Badges is a fun website built to recognize contributors to the Fedora Project, help new and existing Fedora contributors find different ways to get involved, and encourage the improvement of Fedora’s infrastructure.”

Participants who weren’t sure where to start were pointed toward Ralph Bean’s expertly crafted Fedora Sorting Hat at — a great place to match the things you are interested in with the activities that Fedora needs help with. In proper FOSS fashion, this resource is a fork of the asknot project originally built and used by our good friends over at Mozilla for their own contributor sorting hat, available at

Photo of Fedora Badge-a-thon Winners

Left to Right: Aiden Kahrs, Justin W. Flory, and Eric Siegel

Justin W. Flory is a first-year Network Security and Systems Administration major at Rochester Institute of Technology. Outside of his course of study, he is a member of Nexthop, and RITLUG. Justin runs Fedora Workstation on his desktop and his laptop, and also has an old netbook he uses to run Fedora Server.


Badge icons: baby-badger, badge-muse (badge ideas I), riddle-me-this, curious-penguin (ask fedora IV), speak-up!, bona-fide, senior-tagger (tagger III), senior-package-tagger (package tagger III), tagger (tagger II), master-editor, package-tagger (package tagger II), junior-tagger (tagger I), junior-package-tagger (package tagger I), associate-badger (badger 1.5), crypto-panda


Badges Total: jflory7 has earned 33 badges (11.3% of total).

Q: What was your Badge-a-thon Strategy?
“For the most part, I went through. I was eyeing the writing section, and I
emailed the mktg list about contributing to the magazine. I also did a bunch
of package tagging. I was in the wiki workshop at Flock, and I picked up where
we left off on the last day, cleaning up from the migration in 2008,
recategorizing pages, stuff like that.”

Q: What is your advice to future participants?
“The biggest thing: ‘I don’t know how to write code’ doesn’t mean you cannot
contribute. There are plenty of ways to help: testing builds, writing
articles, tagging packages. Just because you can’t write code doesn’t mean
you are worthless, and it is not as scary as it might seem.”

Aidan Kahrs is a 2nd year Network Security and Systems Administration major at Rochester Institute of Technology. Outside of his course of study, he is a member of the RIT Linux User’s Group (RITLUG). He runs Fedora workstation on his laptop, and another machine in his home.

Badges Total: abkahrs has earned 11 badges (3.8% of total)

Q: What was your Badge-a-thon strategy?
“Because I’m new, I wasn’t sure what to do. I started by making a proper
wiki page, and tried to contribute in ways that wouldn’t break anything. I
was nervous about what I was doing.”

Q: What is your advice to future participants?
“I would say is a really useful site. I liked
that that existed as a way to find out where to start doing things. There were
a couple of wiki pages too that had info on how to start contributing to the

Eric Sigel is an alumnus of the RIT Center for Multidisciplinary Studies, who concentrated in Computer Engineering and Computing Security. He was also a member of the RIT Robotics Club, and NextHop. Eric’s main desktop is a Gentoo machine, and he just reinstalled Debian on his netbook. He has another fileserver at home that is also Debian, and he runs Arch on his laptop, connected to the internet via OpenWRT on his router.

Badge icons: crypto-badger, crypto-panda, involvement, mugshot, paranoid-panda, white-rabbit

Badges Total: nticompass has earned 6 badges (2.1% of total)

Q: What was your Badge-a-thon strategy?
“I started looking through the badges, trying to find out how to contribute to websites. Through I found “hey look, website stuff. I can write Javascript.” It wasn’t a normal HTML file, there was a compiler that generated docs, so I got started looking into how to get involved in website team. I was mostly hacking on my router, making it into a filesharing service, experimenting, because I could.”

Q: What is your advice to future participants?
“I’m working on becoming a Fedora dev. Work on something you can hack on. Fix a
typo. Add a comment. You can push to just about anything, just find something.
There is enough out there.”



Software Freedom Day is an annual worldwide celebration of Software Freedom that you can participate in next year. Thank you to our organizers and participants on campuses and in cities around the world. If you have a story or photos from your local event, or if you want to organize a Fedora-specific event in your region, then just drop us a line at commops@lists.fp.o.



Every once in a while, you realize you've been brute forcing your way around some tool that you use every day. Too often, for me, that tool ends up being vim...

While I was mass editing the transcripts I used to create the FSF30 wordclouds, I realized I was doing too much manual movery to get to the next misspelled word. In a moment of clarity, I was like "hey, I bet vim has a way to properly do this!" And of course it did!

]s = move cursor to next misspelled word
[s = move cursor to previous misspelled word


The word 'harry' is a proper word, and the word 'potter' is a proper word, but if they are right next to each other, likely you are not talking about someone persistently accosting a claysmith...

Luckily, vim will let you visually select the words together 'v2e', and then type 'zw' to mark the duo as a "wrong" word. You can then add the correctly capitalized duo of "Harry Potter" to your local dictionary by typing 'v2ezg'.

This will make "harry the potter" not show any misspellings, but "harry potter" will come up as misspelled, and when you spellcheck it, will give the proper caps.



Regular readers may recall previous wieldings on decauseblog of a powerful text visualization tool at our disposal in the CommOps Toolbox--the oh-so-fantastic word_cloud library by amueller.


This past week, I ventured forth to "Beantown" to attend the FSF's User Freedom Summit, and 30th Anniversary Celebration. As expected, it was stellar and chock-full of Free Software Standard bearers, friends, and recent joiners.

You can find the raw transcripts from my time at the event, as well as the conglomerated adverts/blogposts used to generate the clouds below, in my decause/raw repo.

Here's just the keynote-y goodness by Eben. Below is an entirely partial transcript. It is incomplete, and is best accompanied by the video above, hosted via FSF's MediaGoblin Instance:

GNOME Accessibility Developer's Guide: User Interface Checklist

Teaching Through Testing

This is just a list. A checklist. You do not need to be a UI/UX expert, you just need to be a user, and you can contribute substantially by just walking through this list sometime and filing tickets against bugs. You can learn so much about UI/UX just by reading this list. Do it at least once. You will level up so much so fast if you can stick to models like this.

/me <3's the GNOME so much.


This is where my day started:

Happy to do it again this year with Bill Bond and Prof. Jacobs in Kenn Martinez's Software Engineering Freshmen Seminar Course.

Polling the Room:

  • How many of you are freshmen? 100%
  • How many of you have heard of Open Source? ~80%
  • How many of you have a GitHub account? >50%
  • How many of you have heard of George Clinton? 0%

I'll be presenting on Thursday as well. Stay tuned!



This post is in response to the article "The Hacker Hacked" by Brett Scott:

"In this context, the hacker ethic is hollowed out and subsumed into the ideology of solutionism, to use a term coined by the Belarusian-born tech critic Evgeny Morozov. It describes the tech-industry vision of the world as a series of problems waiting for (profitable) solutions."

I have mixed feels about this one. Brett's article is a good one, and has some key pieces of history, and proper shout-outs to the giants whose shoulders we stand on. Props Brett. I, however, think I'm just super biased and jaded, and all like "uh, yeah, obvs, duh." I'm a Hackademic, and perhaps a lil bit too familiar with much of this. At the same time, I'm *very* excited for the mainstreaming of hackery, and defanging the old "ski mask" tropes about hackers. I don't see this as a bad thing, or a risk, because of the fail-safe (we hope) of transparency. When SV startups think they are "disrupting" or "saving the world" by creating yet-another-shitty-app, they are going to get the "validation" they deserve when no one buys their product or contributes to their community. From what I can tell, Hackers don't want to solve other people's problems that don't need solving, they want to make progress.

I think we're seeing a super influx of /potential/ legit hackers, and as much as I love a good retelling of the story of Mel as the next guy, the old-guard elitism needs to leave room for the next gen to learn, even if they are starting from a softer more sterile place than the copper-age hackers did. I think that the thing that makes a hacker a hacker is that curiosity, to see how far the rabbit hole really goes, and to be unafraid to trace a thread all the way down the stack to see where it starts. Everyone has to start somewhere, and with the right exposure, even the "yuppiest" of webdevs can find their way to the core of empowerment. The transformation happens not because of the movies, or job prospects, or gadgets, but because of the autonomy. Learning to solve your own problems is a compounding virtuous cycle that ultimately causes you to question every other cycle not focused on rapid iterative improvement. You can get into that cycle whether you are building shitty-apps, or world changing FOSS code, the production process requires openness, and honesty. The important things are that the infrastructure remain neutral, and that copyleft licensing remain the standard.

We'll look back on IP policy of today the same way that the people who used looms to create textiles look back on hand-stitching, or the way that proponents of the locomotive regarded critics who said that speeds above 30 MPH would cause bodily harm. These machines and processes too, were at a time, a threat to the status quo, and subversive, and unnatural, and, and, and... but at some point they became the new standard, which is the era I reckon we are approaching--exemplified by recent developments like the Open Sourcing of certain closely held programming languages and environments, and bottom-line concerned organizations like Wal-mart and Capital One embracing open development practices, and even contributing back upstream in some cases. "Gentrification" and "appropriation" are great sensationalization SEO words, but words like "synthesis", and "merge commit" are more what comes to my mind when talking about the mainstreaming of hacking. Perhaps I'm not being romantic enough?

I'm not afraid that all the "real"* hackers are going to be displaced or that there is no longer a place for merry pranksters. When the source is open, and the platform accessible, and the pipes neutral, and the power transparent, tricking someone into doing something against their own interests is much more difficult. When resources are not allocated in an optimal fashion, it can be made clear, and whether you are a suit or a rabble-rouser, waste is in no one's best interests.

(*: regular readers will recall my opinions on dictating hacker identity, and my heavy use of quotation marks here--you are a "real" hacker the moment you pop the hood and pick up the wrench, not when someone else tells you you are)



It's finally done, and I've half-way caught up on all the sleep I didn't get during the past week ;) I had a ball seeing all you Fedorans strolling around my regular haunts, and cannot wait to see you all again soon. May your travels home be safe and expedient!

Please be patient with me for the next day or so as I catch up on my inbox and all the administrivia and loop-closing that comes along with hosting a conference in your hometown. Once that is taken care of, I'll be spinning up a proper recap and redux blog post for FLOCK 2015 (aka FLOCKchester).

While we wait for that, if you did attend the conference this year, you should totally fill out the official survey with your feedback, thoughts, and suggestions.




Another treasure trove discovered this week! Last time, it was all the tasks required to release Fedora. This time, it is the master list of all the packages in Fedora!!eleven!

I still cannot get over how cool it is for my $DAYJOB to be in a community that ships an entire operating system, and the stupendously grand scale on which that type of development occurs... Totally ridiculoustown...



Since I got here, I've been doing my best to create a mental map of all the parts of Fedora. From what I've gathered thus far, there are 13 subprojects (See: wiki sidebar), along with a number of web properties, and a slew of upstream communities that Fedorans are tapped into. But even after getting the broadest sense of how many moving parts there are, that still doesn't explain HOW, only who. I've said to myself "Gee whiz, if only there were a list of all the things that needed to be done to ship a release..." Today, thanks to jzb, I have found the HOLY GRAIL of "how a Fedora becomes a release" and I'm here to share it with you too!



I am a license pluralist. I think author and contributor choice of distribution is important, and we need to have as diverse a toolbox of legal instruments as possible, to fit as varied a field of software as we can conceive.


The "Freedom to close" is not a freedom.

The "Freedom to exploit" is not a freedom.

Your organization not adopting copyleft software, is not a failing of copyleft, it is a failing of your organization to participate authentically in an Open ecosystem of innovation. It is your organization saying "I want to keep my options open, in case there is ever a time when it would be convenient to cut you out of the equation."

Folks might say "Yeah, that is what doing business at arm's length is!" or "By nature, people are greedy and/or selfish!"

And that is exactly why we have contracts and licenses! We use them all the time in non-copyleft contexts to ensure that both parties hold up their end of a clearly defined agreement.

So why-oh-why is there some sort of virtue in entering into an agreement that would allow the other party to renege? The freedom to break a social or business contract whenever it is convenient, implies that you think you can outmanoeuvre the other party, and that your strategy's real strength is subterfuge, not what you bring to the table.

That is not what genuine collaboration is. It is an okey doke, and contributors are falling for it left and right against their own interests.

CommOps Toolbox


I can build, but I am certainly not the fastest. Slowly but surely, with help from lmacken and threebean and qalthos, I've been loading up the Commops toolbox.

CommOps Tools in the Box


One of the first data visualizations I worked on after being hired, this was one of the fruits of the PyCon 2015 Sprints. Cardsite displays fedmsg activity as a three-part grid, with the top pane being messages, the middle pane being users, and the bottom pane being packages. It creates a cell if the message is new and hasn't been seen before, or, if it has been seen, updates the message count and does a fancy animation. It is a proof of concept, and not what I would call "complete," but it has a number of merits:

  1. Inspired by the EmojiTracker project, that shows real-time usage of Emoji on Twitter
  2. Gulp/Bower to install JavaScript/CSS dependencies in a programmatic way
  3. Deployed to GitHub via gh-pages branch:
  4. Uses Websockets to provide real-time fedmsg updates to the page
  5. Uses the semantic-ui framework for the front-end


The idea is, take an RSS feed, or a list of RSS feeds, and generate a word_cloud for each feed, and the aggregate content from all the feeds, like so:

python config.json

with config files that look like this:

    {
        "feeds": [
        ],
        "mask_filename": "fossboxlogo-mono.png",
        "output_dir": "feedcloud",
        "output_image": "feedcloud.png",
        "stop_words": ["http", "https"],
        "each_corpi": true,
        "max_words": 1000
    }
  1. Individual word_clouds for each rss feed AND the aggregate cloud of all posts combined!
  2. Uses a config file written in json for easy parsing
  3. Once it is set up, it is very easy to create multiple configurations and generate many visualizations.
  4. Easy-to-use blacklist within the config file for quick tweaking.
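The heart of such a script can be sketched like this. It is a minimal sketch that stubs out the feed-fetching and omits the actual rendering call into amueller's word_cloud library, so it stays self-contained:

```python
import json
from collections import Counter

def load_config(path):
    """Read a feedcloud-style JSON config like the one shown above."""
    with open(path) as f:
        return json.load(f)

def word_frequencies(posts, stop_words):
    """Count words across a feed's posts, skipping blacklisted words.
    The resulting counts would be handed off to word_cloud to render."""
    counts = Counter()
    for text in posts:
        for word in text.lower().split():
            if word not in stop_words:
                counts[word] += 1
    return counts

# Stubbed-in posts; real input would come from each feed in config["feeds"]
posts = ["Fedora badges fedora", "http badges"]
freqs = word_frequencies(posts, stop_words=["http", "https"])
```

Per-feed clouds and the aggregate cloud fall out of the same function: run it once per feed, then once over all posts combined.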


  1. word_cloud is very powerful, but also a very heavyweight stack to stand up and run. After many attempts, I finally got my first virtualenv working, and I've been using the same one ever since in all my word_cloud related projects :/


Arguably the most complete of the tools in the box, it builds upon the work of feedcloud and word_cloud to deliver fancy wordcloud visualizations via Twitter each time an IRC meeting ends and the meeting-logs message goes across the fedmsg bus! This tool is "deployed" on my machine locally as a systemd service, but will hopefully be deployed to Fedora Infrastructure in the near-ish future (packaging the word_cloud stack for Fedora is low on the priority totem pole for me personally, but def a goal I'd like to deliver on in the next 120 days (but probably not the next 30/60/90).) You can glimpse the fruits of wordcloudbot's cycles on the Fedobot twitter account.


fedora-stats-tools is a breeding ground for interesting one-off scripts and tools. It is the default place where various CommOps experiments are being pushed to, before they end up in their own repositories. Here is what we've got in there so far:


    This is a very basic script that uses the requests library to get the raw json for fedmsgs over the past year, and pulls out the final field of that raw json response, which includes a total number of messages.
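A sketch of that kind of query follows. The endpoint and parameter names here are assumptions about the public datagrepper API, and the stdlib is used instead of requests so the sketch stays dependency-free:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed public datagrepper endpoint (an assumption, not from the script)
DATAGREPPER = "https://apps.fedoraproject.org/datagrepper/raw"

def count_messages(raw_body):
    """Pull the total message count out of a raw datagrepper JSON response."""
    return json.loads(raw_body)["total"]

def yearly_total():
    """Fetch a year's worth of message metadata and return the total."""
    query = urlencode({"delta": 365 * 24 * 60 * 60, "rows_per_page": 1})
    with urlopen(DATAGREPPER + "?" + query) as resp:
        return count_messages(resp.read())
```

Only the parsing step matters here; the real script's query parameters may differ.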


    A jinja2 template that creates links to meetbot activities for each subproject. This is an "artisanal" solution we're using to aggregate datagrepper information prior to the completion of hubs. The challenge is to *not* over-engineer something, but to write as lightweight a tool as possible to give us raw data. There is also a cronjob running locally on threebean infrastructure to send me a daily reminder to compile the stats from the information gathered by this template.
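The idea looks roughly like this. The real tool renders a jinja2 template, but this stdlib sketch shows the same shape; the meetbot URL layout here is an assumption:

```python
# Assumed meetbot archive URL layout -- the real template may differ
MEETBOT_BASE = "https://meetbot.fedoraproject.org/sresults/"

def meetbot_links(subprojects):
    """Build one meetbot search link per subproject IRC channel."""
    return {name: "{}?group_id={}&type=channel".format(MEETBOT_BASE, name)
            for name in subprojects}

links = meetbot_links(["commops", "infrastructure"])
```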


    A descendant of meetbot-fedmsg-activity, this jinja2 template takes lists of URLs, if they are not empty, and generates a "daily briefing" with things like action items and links generated by meetings and captured by zodbot. Ideally, these lists will be compiled by using BeautifulSoup in the near future, so that every day, the briefing can be shipped without human interaction (and perhaps tweeted by fedobot, but that is another project for another time.)
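The assembly step might look like this minimal sketch; the section names and output format are made up for illustration:

```python
def daily_briefing(sections):
    """Render the non-empty sections (name -> list of items/URLs)
    into a plain-text briefing, skipping empty sections entirely."""
    lines = []
    for name, items in sections.items():
        if not items:
            continue  # an empty section never appears in the briefing
        lines.append("== {} ==".format(name))
        lines.extend("  * " + item for item in items)
    return "\n".join(lines)

briefing = daily_briefing({
    "Action Items": ["decause to draft the CommOps wiki page"],
    "Links": [],
})
```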


This particular tool and its purposes were covered at length already in a previous post:

Why a toolbox?

Well, I wanted to include a bit of this in my last post, but for now, here is the gist: the position of Fedora Community Lead did not exist before I had the privilege of filling the slot. There are *many* duties and responsibilities, defined by many stakeholders, and right now, I'm a one-man army looking to deliver on each of them. "Linus does not scale," and neither does decause. I'm working on making each of my contributions "autonomous" so that they do not block on a single person; I can invest the labor needed to deliver on these metrics once, then deploy/redeploy as needed. It has been somewhat slow going, but it is my hope that each of these tools can help with a problem space or bucket within the CommOps ecosystem of initiatives (see: for the full proposal, and/or contribute your thoughts on the wiki here,


Metrics is where we've started. It has been a small core of folks working, but the spirit of the Community Operations team should become clearer as these types of tools get built. I'm still learning my way around this massive community of FOSS hackers that is Fedora. I still don't know where all of the corners are, or who all the people are that make each of those legs gallop with the whole, but I sure want to. Someday (150ish days from now) there will be FedoraHubs, which is really the ultimate manifestation and incarnation of Community, built from the amazingly-sleek-and-powerful real-time infrastructure that is powered by the fedmsg ecosystem, but until then, things are going to have to be a bit more manual. If you'd like to help tell the story of your particular corner of Fedora, or would like to help provide the "10,000 foot view" of the project's contributor base, then by all means please reach out in IRC (#fedora-apps on freenode) and let me know. I'm here to help, but I sure could use some too :)



Hi There! It has been a busy couple of weeks! Thankfully I'm done travelling for a little while (and hopefully done with RedEye flights forever too... /me is getting too old for that bizz)


The Fedora Release Engineering FAD (RelEngFAD) gathered 15+ contributors from across a number of projects and teams to discuss the future of "shipping bits to users." As I am still new to this side of the project, I was able to learn more about how this ecosystem operates over those five days, than I had osmosed over the past five years. Etherpad Link

I've got a bunch of inbox and administrivia to catch up on for now, so stay tuned for more granular updates to come!


After upgrading to Fedora 22 last week, I've had to sand down a few rough corners. One that cropped up was that some of the tracks on Soundcloud just would not play while I was in Firefox. I thought it was likely because user-generated content is often prone to take-downs, spurious or not, and just kinda shrugged it off. But after trying to play a few particularly popular tracks from confirmed accounts, I decided something must be amiss. I popped open the code inspector, enabled the console, and right off the bat was greeted with a clever recruiting message from Soundcloud:

"You like to look under the hood? Why not help us build the engine?"

Thereafter, I would get "Media resource could not be decoded" error messages followed by long strings representing the tracks.

Through some search-engine-foo, I was able to dig up this ticket: In it, it is explained that there was likely an issue with codecs not being installed, with Ubuntu-specific instructions. After enabling the RPM Fusion repos, I was able to search for and install the gstreamer1-libav codec, restart Firefox, and play the previously broken tracks.


Happening right now, mattdm is doing an AMA on /r/linux, which can be found here:

I got name dropped a couple times (this is still pretty surreal :P), and here is my response to one of the comment threads on Metrics/New Contributors:

This is one of the things that we hired Remy D to work on

Hi there /r/linux!

This is a great question, and one that members of the Fedora-Infra team have spent the past year building tools and gathering data to answer. The fedmsg project, along with tools like datagrepper, have been collecting stats on developer and community contributions within Fedora, and feeding those stats into Fedora Badges to quantify, recognize, and promote activity. Everything from git commits, wiki edits, IRC meetings, blog posts, package builds (and fails), conference/event participation, all kinds of public activity is being published in real-time on the fedmsg bus! I even have a GNOME Shell extension installed that pops up desktop notifications whenever messages related to my favorite hackers or packages go over the wire :)

From this fire-hose of data we can surface correlations between types of messages, and message patterns as they relate to specific phases of the release cycle (or other timelines for that matter) to make informed decisions of how best to prioritize and publicize action.

Where do new contributors come from?

I'm pretty new to this role in Fedora, but I've been studying and organizing FOSS communities as a Hackademic for some time now. Here is my (wholly unoriginal) take on this: It starts with the task, then the people, then the idea.

This model for organizational development doesn't just play out in FOSS, but in all types of communities of practice. At first you show up because you need to accomplish something. You have an itch to scratch. In the case of a work-for-hire relationship, that itch may be "I need to pay my bills," but in FOSS it is usually, "I need a tool to do a task," paid or not.

You start there, maybe from scratch, or more likely by taking something that works and adjusting it to fit your use-case, with help from people who came before you. Those who helped you are likely people solving problems you are interested in solving, and the more you work together, the faster you can complete the tasks you set out to accomplish. You help them, they help you, and the virtuous cycle is off and running :)

Once you've established a working relationship with the people, you are now part of something larger. That larger something--whether it is a company, or a hackerspace, or a common goal or cause or idea--is the thing that eventually motivates you to stay and continue contributing.

New contributors come for the task, but stay for the community.

Our problem is that there is so much more work than there are people who can do that work. New contributors don't emerge from the womb ready to start hacking. We (Fedora and FOSS-at-large) must support and cultivate an entirely new base.

I've helped a decent number of new contributors get started through my work at RIT, which has mostly been about equipping them with tools in their toolbelt to do certain tasks. Once a new contributor feels the empowerment that comes from solving their own problems, they usually find their way to people and places where those types of problems are getting solved, FLOSSophy or not.

From what I've seen, new contributors come not just from working with the best tools for the job, but from having a positive place to experiment and learn (and teach!) about using them.


Someone posted to the Fedora-Join list this week with questions about Fedora Cloud, and cloud in general (see: Mattdm, Fedora Project Lead, wrote a *fantastic* response that I think warrants resharing below.

Your confusion is understandable, because "cloud" has become a
marketing buzzword and is often applied to the things you've described.
But that's not quite what we mean here.

"Cloud computing" is the idea of providing the fundamental compute
resources — cpu cycles, storage, and memory — as a service. In this
model, rather than having these things on-site in a server room, you
pay a metered price to a utility company. "The cloud" in this sense is
like the grid from which we draw electric power.

Instead of energy companies, the providers are Amazon (EC2), Google
(Compute Engine), Microsoft (Azure), Digital Ocean, and others. And
this isn't science fiction — if you have a startup (especially one
where you want to try new things and may need to scale up (_and down_!)
quickly), this is how things are done now. And it's appealing to large
companies as well, where an on-premises cloud based on something like
(open source!) OpenStack lets your IT department become an in-house
utility provider to your developers.

So: Fedora Cloud is simply an instance of Fedora optimized to run in
this environment. If you go to
and click on one of the Amazon EC2 images at the bottom of the page,
you can launch a new remote Fedora system in minutes. And, you can use
this as a sort of self-service hosting provider, if you like, but the
important thing is that you can actually also access all control of the
machine via an API, which means you can build systems that
automatically scale up (and again, down) as needed.

But, at this point, Fedora Cloud is just that basic building block, and
doesn't actually provide any cloud-based _services_. For that, one
approach is to use popular container technology Docker on top of Fedora
Cloud — read more in this recent Fedora Magazine article:

and take a look at 
for practical examples, which includes among other things OwnCloud (a
remote storage solution) and Wordpress (the blogging software, of course).

Matthew Miller

Fedora Project Leader


Since I'm kinda new here, I still spend much of my mental energy thinking about how to FCL, and wrapping my head around this wonderful bazaar that is Fedora. One of the best parts is getting to enjoy "Fedora-Firsts" on a daily basis.

Today's firsts were my first non-decause specific wiki page, and first time chiming in on the Fedora-Join Trac.

I spend a lot of mental energy thinking about "HowFOSS", but even moreso "WhyFOSS?" So when I got pinged in #fedora-join today (an up-and-coming new contributor onboarding group within Fedora), I was *EXXSTATIC* when I found Join-SIG Trac Ticket #10, proposing a "FLOSSophy" contest.

I added my thoughts, which were well received so far, and have created a wikipage here for others who want to chime in too:

This is still a very early-stage idea, but it won't be for long, as the Join-SIG would like to propose it to the council in the very near future.

I'm likely on the hook for providing a version of my "WhyWeFOSS" as an example, so stay tuned for that post in the near-ish future.

UNICEF Innovation Centre and Fund Launch

The raw-but-full transcript from my recent visit to the United Nations for the Launch of the UN Innovation Centre & Fund can be found in my decause/raw repo on GitHub, but here are a few choice quotes, and a rough word_cloud mask.

"When I first started 16 years ago, I asked one of my colleagues, how does it work? Innovate, Demonstrate, Replicate, Advocate. I've followed that ever since. You can't do it just once, you gotta do it all the time."
~David Morely, President & CEO, UNICEF Canada
"If you give voice to the common people, they will share their ideas generally. Much more knowledge exists in Open Source... Children are not a sink of resources, but a source of innovation."
~Prof. Anil Gupta, Founder of Honeybee Network
There is a saying that You can count the seeds in an apple, but you cannot count the apples in a seed. We're hoping to build tens of millions of apples in the future.
~Ambassador HAHN Choong-hee, Deputy Permanent Representative of the Republic of Korea to the UN in New York.

More Harm than Good svpino

This blogpost has been making the rounds on the Social Medias as of late:

I'm sure many of my hacker peeps read this article and thought to themselves "YASSS! This guy is so totally right!!1! Why n00bs gotta n00b! RTFM, Br0!"

This stuff is toxic: it will not help onboard new devs, nor improve the habits of careless ones. I feel your pain, but this isn't going to actually solve your problem; it will compound it by creating a hostile environment. This is a classic case of #doingitwrong

I understand that the author is frustrated with sub-par candidates applying for positions they are not qualified for, but this kind of: "You are not allowed to even identify as a ________ until you meet my list of arbitrary requirements" is dangerous.

Yes, it would be nice if everyone applying to your open position could solve these problems, but that group of people you are bemoaning here, those who are irreverently spamming their resumes everywhere, are not going to care about this post, or what you think qualifies them. They are just playing the job-lottery...

You know who will care and take it very seriously? All the aspiring programmers and software engineers, whom you just told were not good enough to even call themselves, or think of themselves as, programmers... Why would you want to work with people who don't think you are good enough or worth it?

You can say someone is not qualified for a position, but please don't try to tell someone who they are, or how they should think about themselves. Everyone has to start somewhere. Programmers do not emerge fully-formed from the womb--they learn from others, and from failure.

Proposal: CommOps for Fedora?

Community + Operations = CommOps

The rise of DevOps has been swift. Sysadmins are increasingly instrumenting and integrating automated systems to stand up and maintain their infrastructure. This same approach can be taken to support community infrastructure in a distributed and automated fashion that doesn't force people to choose between using their precious volunteer time to "build things" or "build communities that build things."


One person, Lead or otherwise, cannot possibly know everything that is happening in every corner of a project the size of Fedora, let alone where each of those sub-communities would like to go in the future. To do this, we'll need broad participation across many teams and communities. I would propose that a delegation, rather than an elected board or other top-down governance structure, be the vehicle through which to gather input and reach consensus on community infrastructure. Delegates will represent distinct groups within Fedora, selected from within their delegation, with additional input/participation by non-voting delegates who want to be involved.

Delegations include:

  • The 13 Fedora Subprojects
  • The five working groups (three Editions, plus Base and Stacks/Env Working Groups)
  • Any active and interested SIGs (will be opt-in)
  • Distinct web properties without a team/committee/group (Ask.fp.o maybe falls into this category?)
  • Other moving parts of Fedora that I have not yet identified, but should have representation

Operating Principles:

  • Instrument activity in existing communities to create and track metrics (a good initial effort exists at
  • Federate and syndicate with as little burden on contributors as possible (like middle-ware that wraps and pipes existing process/activity)
  • Community engagement and outreach is something *everyone* in Fedora should be concerned with and invested in, not just Ambassadors or Marketing.

Technical Strategy:

  • Use real-time communication channels and infrastructure when possible (Fedmsg, FMN, Zodbot, others)

Meeting Format:

I would like to adopt the ticket strategy that is used by the Design Team, resulting from their latest FAD, which is ticket-driven meetings, with open-floor at the end.

  1. Tickets where requests for information go unanswered for 2 weeks become inactive.
  2. Tickets that are stalled for 2 weeks either get unassigned, or can be renewed for an additional two weeks by their owner.
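Those two rules are mechanical enough that a bot could enforce them. A hypothetical sketch (the function name and the renewal behavior are my assumptions, not settled policy):

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(weeks=2)

def ticket_is_stale(last_activity, now=None, renewed=False):
    """Two-week rule: no response in two weeks means inactive,
    unless the owner renewed for an additional two weeks."""
    now = now or datetime.utcnow()
    window = STALE_AFTER * 2 if renewed else STALE_AFTER
    return now - last_activity > window
```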

Things that the Fedora Community Operations (CommOps) Team helps with:

  • Unified Messaging. It is my hope that when someone asks the question, "What is Fedora?" to an existing community member, *everyone* will have at least a standard elevator pitch, whether you are a designer or engineer or translator. Ideally this is going to be informed by the Fedora Core Values and Mission, and developed in the open similar to the Red Hat Mission Statement. Much input from existing groups (such as marketing and design) will be needed.

  • Curating a queue of "stories." Much in the spirit of, the idea of "Cover Posts" which can be generated from existing content, and point to those existing parts of Fedora to minimize the burden of "publishing in yet another place." Content that is highly designed and curated already (announce-list, Fedora Magazine) should get the "greenlight" to be published automatically, and others added to a curated content queue from the community by Zodbot, mail-list, Fedmsg, and/or other means. This queue of curated content will help feed both Fedora Magazine (end-user focused content) and a heretofore undefined Community/Contributor Outlet (perhaps a council or CommOps blog?)

  • Badges Requests. To help direct contributor activity, the community team will help existing sub-projects come up with badges, and series of badges, to establish an official process and credential for team/subproject membership. The badges *design* process is operating very well, but the badges *strategy* process falls onto the design team's already full plate. Let's fix that.

  • New Contributor Onboarding via Fedora Hubs. This is an existing effort, with momentum, and full support of the design team, and buy-in from the infrastructure team. I am *thrilled* to not have to create or recreate this wheel, and want to support Hubs as the community team's official strategy. The gist is: "The point behind the idea was to provide a space specifically for Fedora contributors that was separate from the user space, and to make it easier for folks who are non-packager contributors to Fedora to collaborate by providing them explicit tools to do that. Tools for folks working in docs, marketing, design, ambassadors, etc., to help enable those teams and also make it easier for them to bring new contributors on-board." Proposal here: and results here:

  • Wiki. The wiki is aging. The wiki tries to be all things to all Fedorans. There are a number of initiatives happening (I've heard Pete Travis is moving User Docs out of the wiki into a style site, pfrields says there is an {{old}} tag that is going to help us sift through content, and there are likely other initiatives too.) We'd like to do things like automatically generate User pages on the wiki (in the spirit of the badges template) so that users don't have yet-another-place-to-edit.

  • Internal Communications. This is an ongoing and difficult problem, and we have come up with an approach, but it does resemble the so-far-proposed structure of FOSCo. Each of the 13 official subprojects, active and interested SIGs, working groups, and each web-property (Ask, Magazine, etc...) can choose a delegate. Since this is a *massive* synchronous effort, we will need a way for each delegate to report on behalf of their delegation via a template. That template will be ticket driven. Creating zodbot hooks to fill in this template from existing IRC meetings will solve this in many cases, but not all. Having a method to manually submit reports will help as a fallback.

  • Perhaps Code of Conduct and Diversity may make sense to fall under the community team as well. The new Diversity Advisor (search committee is forming now) will likely be interested, if not be the owner of this aspect of the community team.

  • Metrics. Because of the Fedmsg stack, we have some very detailed raw data on Fedora contributor activity. There are a number of efforts being undertaken to generate data visualizations and regular reports based on this raw data. A critical part of developing metrics will be defining what kinds of questions we want to ask of this massive store of raw data.

  • Other things I didn't think of (which is likely many)

That Rugged Raw

It has been a more-busy-than-usual week on campus, and I've had a pretty packed conference schedule. I've fallen much too far behind on my New Year's Resolution:

Ship Copy.

So, in its most-purest most-rawest most-honest form, you can find a number of raw transcripts in the new repo: decause/raw

So, What does that look like?

├── libreplanet
│   └── 2015
│       ├── closingkeynote-libreplanet-karensandler.txt
│       ├── debnicholson-friday.txt
│       ├── freesoftwareawards.txt
│       └── highpriorityprojs-libreplanet-friday.txt
├── pycon
│   └── 2015
│       ├── ninapresofeedback.txt
│       ├── pycon-day2-keynotes.txt
│       ├── pyconedusummit2015.txt
│       └── scherer-pycon-ansible-day2.txt
└── RIT
    └── 2015
        ├── biella-astra-raw.txt
        ├── biella-molly-guest-lecture.txt
        └── molly-sauter-where-is-the-digital-street.txt

6 directories, 13 files

$ wc -w libreplanet/2015/*.txt
 2287 libreplanet/2015/closingkeynote-libreplanet-karensandler.txt
  139 libreplanet/2015/debnicholson-friday.txt
  586 libreplanet/2015/freesoftwareawards.txt
 2297 libreplanet/2015/highpriorityprojs-libreplanet-friday.txt
5309 total

$ wc -w pycon/2015/*.txt
  106 pycon/2015/ninapresofeedback.txt
 2233 pycon/2015/pycon-day2-keynotes.txt
 1844 pycon/2015/pyconedusummit2015.txt
   63 pycon/2015/scherer-pycon-ansible-day2.txt
4246 total

$ wc -w RIT/2015/*.txt
 4521 RIT/2015/biella-astra-raw.txt
 2489 RIT/2015/biella-molly-guest-lecture.txt
 1964 RIT/2015/molly-sauter-where-is-the-digital-street.txt
8974 total

18529 total total

18,529 words, or just over 41 pages, of raw text.
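For the curious, that page figure works out assuming roughly 450 words per typed page (my assumption; I didn't note the exact divisor):

```python
# Rough words-to-pages conversion; 450 words/page is an assumption.
words = 18529
WORDS_PER_PAGE = 450
pages = words / float(WORDS_PER_PAGE)
print("%.1f pages" % pages)  # just over 41
```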

There is a flaw in my workflow. Though there is some utility in a raw transcript, most of that value comes from delivering it in real-time. After the fact, there is much post-production work to be done, like spell checking. Even then, if there is a video, the transcript is only a partial, incomplete record. This is bothersome to many potential downstream consumers of raw text. So where does that leave us?

Word Clouds

I've played with word_cloud before within my decause/presignaug repo, for building presidential inauguration visualizations last year. Since then, word_cloud has gotten much more sophisticated--now using scikit-learn and numpy, and providing the ability to fit word clouds within images!

List of Issues/Fixes

  • You'll need to pip install cython first
  • You'll need to sudo yum install freetype-devel (probably not necessary, since this is alleviated by pointing at a diff .ttf typeface...)
  • You'll have to edit your FONT_PATH within
  • image masks *must* be saved as greyscale, not rgb images (this was a biggie, and I wouldn't have figured it out if GIMP didn't display the color encoding in the file statusbar when you opened things :) )
I went ahead and uploaded the changes I made to my fork on GitHub: if you'd like to see them. The important files are and
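On that greyscale-mask gotcha: greyscale conversion is just a weighted sum of the RGB channels. Here's a minimal illustrative sketch using the common ITU-R BT.601 luma weights (the same formula PIL documents for convert('L')); this is for understanding, not the word_cloud code itself:

```python
def to_greyscale(rgb_pixels):
    """Convert (R, G, B) tuples to single 0-255 luminance values,
    using the ITU-R BT.601 weights: L = 0.299R + 0.587G + 0.114B."""
    return [int(round(0.299 * r + 0.587 * g + 0.114 * b))
            for r, g, b in rgb_pixels]
```

In practice, opening the mask with PIL and calling convert('L') does this for you before you hand the array to word_cloud.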

Hackademics on Campus

I'd love to give a proper reckoning of the past 48 hours, but it would take a not-so-insignificant feat of editing...
[decause@chapeauxrouge story []]$ wc -w biella-*
 4521 biella-astra-raw.txt
 2486 biella-molly-guest-lecture.txt
 7007 total
7007 words total

Enter the Soylent

Thank you everyone for your input and advice on my previous post about Soylent. Folks seem pretty divided on this particular lifehack, and now I have a much better idea of why, and what the shortcomings of Soylent's current formula are.

Though there is much to be desired in the 1.4 version of Soylent, it is better than the 1.0 version of Drive-Thru nutrition that I've been apt to subsist upon as of late.

I've only got enough for a couple of days, but that should be enough to know whether or not I can stomach it. So far, I've lucked out and I don't hate the taste, though I did wait until I was *rilly* hungry before "eating" to trick my body into enjoying it more ;)

Meal #1 was a success, and I'm bringing Meal #2 with me today to campus.

Fedora 22 Beta Release!

Fedora 22 Beta Release Announcement

The Fedora 22 Beta release has arrived, with a preview of the latest free and open source technology under development. Take a peek inside!

What is the Beta release?

The Beta release contains all the exciting features of Fedora 22's editions in a form that anyone can help test. This testing, guided by the Fedora QA team, helps us target and identify bugs. When these bugs are fixed, we make a Beta release available. A Beta release is meant to be feature complete and bears a very strong resemblance to the third and final release. The final release of Fedora 22 is expected in May.

We need your help to make Fedora 22 the best release yet, so please take some time to download and try out the Beta and make sure the things that are important to you are working. If you find a bug, please report it – every bug you uncover (and/or help fix!) is a chance to improve the experience for millions of Fedora users worldwide.

Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as feasible, and your feedback will help improve not only Fedora but Linux and free software on the whole.

Base platform

  • Faster and better dependency management: Yum has been replaced with dnf as the default package manager. Dnf has very similar command line options and configuration files compared to yum, but also has several major internal changes, including using libsolv (in coordination with friends from the openSUSE project) for faster and better dependency management. dnf-yum provides automatic redirection from yum to dnf on the command line for compatibility. The classic yum command line tool has been renamed to yum-deprecated as a transitional step for tools still using it.

Fedora 22 Cloud

The Fedora 22 Cloud Edition builds on the work completed during the Fedora 21 cycle, and brings in a number of improvements that make Fedora 22 a superb choice for running Linux in the cloud.

Ready for the Fedora 22 release, we have:

  • The latest versions of rpm-ostree and rpm-ostree-toolbox. You can even use rpm-ostree-toolbox to generate your own Atomic hosts from a custom set of packages.

  • Introduction of the Atomic command line tool to help manage Linux containers on Atomic Hosts and update Atomic Hosts.

Fedora 22 Server

Fedora 22 Server Edition brings several changes that will improve Fedora for use as a server in your environment.

  • Database Server Role: Fedora 21 introduced Rolekit, a daemon for Linux systems that provides a stable D-Bus interface to manage deployment of server roles. The Fedora 22 release adds onto that work with a database server role based on PostgreSQL.

  • Cockpit Updates: The Cockpit Web-based management application has been updated to the latest upstream release which adds many new features as well as a modular design for adding new functionality.

  • XFS as default filesystem: XFS scales better for servers and can handle higher storage capacity, so we have made it the default filesystem for Fedora 22 Server users. Other filesystems, including Ext4, will continue to be supported, and the ability to choose them has been retained.

Fedora 22 Workstation

As always, Fedora carries a number of improvements to make life better for its desktop users and developers! Here's some of the goodness you'll get in Fedora 22 Workstation edition.


  • The GNOME Shell notification system has been redesigned and subsumed into the calendar widget.
  • The Terminal now notifies you when a long running job completes.
  • The login screen now uses Wayland by default with automatic fallback to Xorg when necessary. This is a transitional step towards replacing Xorg with Wayland by default in the next release and should have no user visible difference.
  • Installation of GStreamer codecs, fonts, and certain document types is now handled by Software, instead of gnome-packagekit.
  • The Automatic Bug Reporting Tool (ABRT) now features better notifications, and uses the privacy control panel in GNOME to control information sent.


  • The Nautilus file manager has been improved to use GActions, moving away from the deprecated GtkAction APIs, for a better, more consistent experience.
  • The GNOME Shell has a refreshed theme for better usability.
  • The Qt/Adwaita theme is now code complete, and Qt notifications have been improved for smoother experience using Qt-based apps in Workstation.

Under the covers:

  • Consistent input handling for graphical applications is provided by the libinput library, which is now used for both X11 and Wayland.


Fedora spins are alternative versions of Fedora, tailored for various types of users via hand-picked application sets or customizations. You can browse all of the available spins via Some of the popular ones include:

Fedora 22 KDE Plasma spin

Plasma 5, the successor to KDE Plasma 4, is now the default workspace in the Fedora KDE spin. It has a new theme called Breeze, which has cleaner visuals and better readability, improves certain work-flows, and provides an overall more consistent and polished interface. Changes under the hood include a switch to Qt 5 and KDE Frameworks 5, and a migration to a fully hardware-accelerated graphics stack based on OpenGL(ES).

Fedora 22 Xfce spin

The Xfce spin has been updated to Xfce 4.12. This release has an enormous number of improvements, including HiDPI support, improvements to window tiling, support for Gtk3 plugins, and many improvements for multi-monitor support.

Issues and Details

This is a Beta release. As such, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on freenode.

As testing progresses, common issues are tracked on the Common F22 Bugs page:


While Fedora 22 is still under active development, we have a number of new features developed in parallel for Fedora 23 as well. While all of these features are work in progress and the plans have not been finalized, we want to highlight a few major changes expected and invite your early testing and feedback.

  • Wayland by default for Fedora 23 Workstation. XWayland will continue to be provided for compatibility with applications using X.

  • Python 3 by default for Fedora 23 Workstation: While most of the default applications are already using Python 3 in Fedora 22, Fedora 23 Workstation will only include Python 3 by default. Python 2 will continue to be included in the repositories.

  • A Vagrant image for Fedora 23 Atomic Host and Cloud Images. We're supplying Vagrant boxes that work with KVM or VirtualBox, so users on Fedora will be able to easily consume the Vagrant images with KVM, and users on Mac OS X or Windows can use the VirtualBox image.

For tips on reporting a bug effectively, read "how to file a bug report":

Release Schedule

The full release schedule is available on the Fedora wiki. The current schedule calls for a final release at the end of May.

These dates are subject to change, pending any major bugs or issues found during the testing process.


One of my students tagged me in an issue on GitHub about helping new contributors during the National Day of Civic Hacking. After spending some time writing the comment, I decided to repost it here :)

Orig thread here: reply here:

In the past, many local event organizers have had agencies reach out to them directly and offer to partner on specific projects and initiatives (our local events in Rochester have featured challenges and speakers from the EPA, for example.) I would recommend reaching out to the national organizers, and seeing if you can get a list of cities/events that are not already partnered with a federal agency, or see if they will put your projects out on blast during the next organizer's call.

Either way, the most important thing for getting new contributions, IMHO, is to be sure you have *clear* action items, that are surmountable in the time of the event, with *dedicated* upstream mentors ready to synchronously provide feedback. That sounded kinda buzzword-y, so:

  1. Clear action item(s) (FIX #1337: CSS Bug on
  2. Clear documentation (README with instructions for getting stack up and running, styleguides, etc...)
  3. Person in IRC/Chat actively answering questions from contributors, and ideally hacking with them.

SecondMuse historically does a great job vetting the "problems" that agencies come up with, so working with them will likely help with that first bullet point.

There is *nothing* worse than spending an entire hackathon trying to get "to the starting blocks" and failing to get a stack up and running. It is demoralizing, and leaves new contributors very discouraged. Be sure that whatever contributions you are looking to garner have stacks that can be trivially installed on Linux/Mac/Windows. (i.e. - shipping a requirements.txt or with your python project, or even better, distributing to for easy installation.)

Having that mentor available to kick down blockers and vgrep tracebacks is the difference between a new contributor spending 3 hours hunting down an error, and a mentor providing that 'obvious-to-them-seen-it-a-million-times-one-liner-fix' in 3 minutes. If you can get a mentor who can commit to the *entire* event, that is a super amazing morale boost for new contributors. There is a certain magic in looking in channel, or around the room, and seeing upstream hacking right alongside you in the trenches deep into the wee hours of the morning.

newfangledjstoolschains: Part I

I stumbled upon semantic-ui whilst surfing, and immediately became intrigued. It made sense, and it looked great! I had to try it for my new static blog!

Sure, I could just generate the source for the widgets and css I wanted from their docs, and hand-copy it into my various static folders, but instead, I wanted to attempt to employ the power of build. It has been a long while, but I'm slowly remembering how to wield themtharnewfangledjstoolschains, aka Gulp, and bower, and npm.

Gulp and bower in particular are quite useful. Bower is like pip install for javascript-y libraries. But before using bower, we gotta install node. If there is one thing that I know I love, it is installing random javascript code from the internet onto my machine globally. Every time I see things like this in a README file, I get very very sad:

$ sudo npm install -g totally_legit_js_library
$ sudo pip install totally_legit_python_library

Please, don't sudo install the things, and certainly not globally, without taking a moment to think if that is desirable or necessary. Don't get me wrong, I sure don't grok all the way to the bottom of every stack I deploy, but this is exactly why I try not to give any-ole stack the ability to run as root.

On top of that, I've been taught you should *almost* always use a virtualenv, or other safe and somewhat isolated micro-universe far away from your system packages.

There are a number of solutions, but my favorite one thus far has been installing into a python virtualenv! One time at a hackathon, I blindly trusted a teammate who encouraged me to just curl and ./ some shell script off of a website somewhere to get nvm set up, which I begrudgingly did, but have since discontinued the practice of. Here is what I've been doing instead:
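Roughly, the build-node-from-source-into-a-virtualenv dance looks like this (the version number and download URL are placeholders I've filled in for illustration, not the exact ones from my shell history):

```shell
# Activate a virtualenv, then build node with the venv as its install prefix,
# so `npm install -g` lands in $VIRTUAL_ENV/bin instead of system-wide.
virtualenv ~/venvs/blog
source ~/venvs/blog/bin/activate
wget https://nodejs.org/dist/v0.12.2/node-v0.12.2.tar.gz
tar xzf node-v0.12.2.tar.gz
cd node-v0.12.2
./configure --prefix="$VIRTUAL_ENV"
make && make install
npm install -g bower gulp   # now safely scoped to the virtualenv
```

No sudo anywhere, and blowing away the virtualenv cleans up the whole experiment.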

Yes, it takes longer, but there is something so satisfying about building from source. I found that after I got node installed, I could npm install -g all-the-things that I needed, and those would be conveniently located within a python virtualenv that I'd likely be using to serve up the static content anyways.

So far, so good! I've got a working js toolchain, with my desired deps installed! You can find the initial commits here:

Part II Goals

  1. Get node and npm installed on my local machine without requiring root
  2. Get a working bower.json package to install the things
  3. Get a working gulpfile.js package to move the installed things
  4. Get a working nikola deploy workflow to run the gulpfile in addition to building the site!
  5. Incorporate semantic-ui cards into fedmsg feed, and possibly other aspects of site.

Stay tuned for Part II!


Welcome to the world of static blogs! I've been neck-deep in a seemingly stable and wonderful project called nikola, which allows for some pretty fantastic python static site generation.

I've been using flask for all my "deploying a quick webapp to openshift" needs, which has worked out splendidly, but after experimenting with I felt like I had to dive in head first and see what all the hubbub was about.

Some 4-ish hours later, here we are!

Some caveats:

  • The amazing themes listed at are unavailable to me...
  • I don't even know what I don't know I'm doing wrong yet ;)
  • I'm git push -f openshift master to a fresh php5.4 cartridge
  • Scratch that, I'm no longer force-pushing to production! (but I'm not sure why nikola deploy rsync --delete started allofasudden blowing away .git/ after I had been deploying in such a way for hours this evening...)
  • I've added my articles to my blog via the feed_import plugin!!!
  • I've added my articles to my blog via hand copying the source! Def not as exciting, but still cool.


I'm SUPER excited to play around more with programmatically generating static content with the power of python.

Shout-out lmacken, threebean, and ryansb for their wizardry and patience.


This is a raw dump of brainstormery had during a hacksession with Threebean.


$ sudo yum install python-fedmsg-meta-fedora-infrastructure
$ hub clone ralphbean/fedora-stats-tools

The Longtail Metric

Though this was only about 90 minutes of cycling, it is the part that is burned most into my brain. This metric is all about helping identify how "flat" the message distribution is, to avoid uneven burnout mode... aka, take the agent generating the most messages within a time frame (the "Head") and the agent generating the least messages in that timeframe (the "Tail"), and draw a line between them. The more "flat" that line is, the more evenly the generated messages are spread amongst all contributors. Still unclear? Me too ;) Here's some python instead:

Logtail.analyze at

    import collections
    import json
    import pprint
    import time

    import requests

    import fedmsg.config
    import fedmsg.meta

    config = fedmsg.config.load_config()
    fedmsg.meta.make_processors(**config)  # required before msg2usernames() will work

    start = time.time()
    one_day = 1 * 24 * 60 * 60
    whole_range = one_day
    N = 50

    def get_page(page, end, delta):
        url = ''
        response = requests.get(url, params=dict(page=page, end=end, delta=delta))
        data = response.json()
        return data

    results = {}
    now = time.time()

    for iteration, end in enumerate(range(*map(int, (now - whole_range, now, whole_range / N)))):
        results[end] = collections.defaultdict(int)
        data = get_page(1, end, whole_range)
        pages = data['pages']

        for page in range(1, pages + 1):
            print "* (", iteration, ") getting page", page, "of", data['pages'], "with end", end, "and delta", whole_range
            data = get_page(page, end, whole_range)
            messages = data['raw_messages']

            for message in messages:
                users = fedmsg.meta.msg2usernames(message, **config)
                for user in users:
                    results[end][user] += 1


    with open('foo.json', 'w') as f:
        f.write(json.dumps(results))

Logtail.analyze at

import json

comparator = lambda item: item[1]

with open('foo.json', 'r') as f:
    all_data = json.loads(f.read())

for timestamp, data in all_data.items():
    for username, value in data.items():
        all_data[timestamp][username] = float(value)

timestamp_getter = lambda item: item[0]

sorted_data = sorted(all_data.items(), key=timestamp_getter)

results = {}

for timestamp, data in sorted_data:
    head = max(data.items(), key=comparator)
    tail = min(data.items(), key=comparator)

    x1, y1 = 0, head[1]
    x2, y2 = len(data), tail[1]

    slope = (y2 - y1) / (x2 - x1)
    intercept = y1

    metric = 0

    data_tuples = sorted(data.items(), key=comparator, reverse=True)

    for index, item in enumerate(data_tuples):
        username, actual = item
        # line formula is y = slope * x + intercept
        ideal = slope * index + intercept
        diff = ideal - actual
        metric = metric + diff

    print("%s, %f" % (timestamp, metric / len(data)))
    results[timestamp] = metric / len(data)

import pygal
chart = pygal.Line()
chart.title = 'lol'
chart.x_labels = [stamp for stamp, blob in sorted_data]
chart.add('Metric', [results[stamp] for stamp, blob in sorted_data])
# The original snippet stopped here; rendering the chart to disk is the
# natural last step.
chart.render_to_file('metric.svg')
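If a toy example helps: here is the same head/tail computation run by hand on a made-up three-user distribution (hypothetical numbers, not real datagrepper data). A perfectly even distribution would score zero.

```python
# Toy run of the head/tail "flatness" metric; numbers are invented.
data = {'alice': 10.0, 'bob': 6.0, 'carol': 2.0}

head = max(data.values())  # the "Head": most messages
tail = min(data.values())  # the "Tail": fewest messages

# Line from (0, head) down to (len(data), tail), as in the script above.
slope = (tail - head) / len(data)
intercept = head

counts = sorted(data.values(), reverse=True)
metric = sum((slope * i + intercept) - actual
             for i, actual in enumerate(counts))
print(metric / len(data))  # 0.0 would mean a perfectly flat distribution
```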

Stuff to build/consider next?

Radar Charts

We must be concerned with normalizing the data, because koji will always have the highest magnitude of messages. This is done by:

  1. querying all messages of a type, to get the total
  2. querying just the messages for that user, in that type
  3. dividing usermessages/totalmessages

Daily +/-: just the diff of topic counts
Weekly +/-: just the diff of topic counts
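As a sketch, that normalization might look like this in Python; the topic names and counts are hypothetical, standing in for real datagrepper query results:

```python
# Hypothetical per-topic counts, standing in for real datagrepper results.
total_messages = {'koji': 5000, 'git': 120}  # step 1: totals per topic
user_messages = {'koji': 250, 'git': 60}     # step 2: this user's counts

# Step 3: the user's share of each topic, so high-volume sources like koji
# no longer dominate the metric by sheer magnitude.
normalized = {topic: user_messages[topic] / float(total_messages[topic])
              for topic in total_messages}
```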


  • barchart with bar for each message topic?
  • array of "lights" that blink each time a message comes across the bus
  • revisit the live-gource of fedmsg :)

What can we do to improve computer education?

SIGCSE 2015 for Computer Science educators kicks off this year from March 4 - 7 in Kansas City, Missouri.


The SIGCSE Technical Symposium addresses problems common among educators working to develop, implement and/or evaluate computing programs, curricula, and courses. The symposium provides a forum for sharing new ideas for syllabi, laboratories, and other elements of teaching and pedagogy, at all levels of instruction.

Last year Pamela Fox, Computing Curriculum Engineer at Khan Academy, was part of a panel called "Disruptive Innovation in CS Education." I spoke with her afterwards to get her thoughts on how open source fits into education and the future of computer education.

This is a partial transcript.

Where are you from?

I was born in Los Angeles, grew up in upstate New York. My dad is a computer science professor at Syracuse University. My mom is a rocket science programmer. My dad is launching a "big data" MOOC, so we're both very interested in this field.

Where are you now?

Now, I work for Khan Academy in Mountain View, California and live in San Francisco. I went back to the west coast as soon as I could, joining Google after graduating from the University of Southern California in Los Angeles. I went to Australia, then got back to the Bay Area three years ago. I was working on the Google Maps API in Developer Relations, writing articles and demos, which is basically what I do now, but for non-proprietary technology.

I first learned HTML in the 7th grade. Within a year, I made a website that taught HTML to other people called "htmlforkids" or something (even though I was a kid too.) That was probably my first "official" educational content. After that I was a computer camp counselor. In college, I organized workshops around 3D programming (started a SIGGRAPH chapter). I use Khan Academy to get better at math now.

Why free and open source software?

I really enjoy teaching, and I enjoy trying to figure out how to teach something. I find it fascinating when I put out a new course, I read the comments and say, "Wow, I forgot what it was like not to know." I'm interested in humans; I read a lot about how humans work, and behavioral science. There is a lot of that in teaching; teaching people is all about learning. I'm just learning.

I'm generally a fan of open source, and that is another reason why I'm at Khan Academy, where we do that. As a web developer, I shouldn't have to reinvent the wheel. Often I say, "Really... really? I gotta solve this problem? I'm the only one that has ever tried to do this?" No, it's just that someone didn't share it. Many of these components should be open source. Some people may say, "Well, we don't have jobs if we don't have to rewrite it." I don't want to believe that we live that way. I have friends with open source projects, who have tried to make money on it, doing enterprise versions, and getting paid for support. I'm always interested in the different ways of monetizing code. I feel like that part still has open questions.

I think we should encourage sharing. Kids are used to the idea of "cheating." Someone copies your code, and they say, "Hey, that's cheating," and we have to tell them, "No, it's MIT Licensed, and it is open source."

We have to teach them that sharing is OK. We have to do a better job of teaching open source and sharing, counter to what they may see in school. I'm upset there is no representation from the coding academies here at SIGCSE. They are trying to figure out how to get people ready for programming jobs in 12 weeks. I feel like I'm here representing that industry. Half engineer, half educator. I feel like I'm representing the "meritocracy" getting a real job thing too.

What can we do to improve computer education?

Coding academies are formed by people who learned alternatively, or didn't do well in college, and they are figuring out how to teach based on industry, and their good and bad experiences in college. They have good things to say about career-oriented computer science education. They should be here (at SIGCSE) too. Girl Develop It, Women Who Code, they are all doing similar work, and they are disconnected from this world. I'm not just trying to do women-friendly hackathons but newbie-friendly hackathons too. More women are newbies than men right now, so if you fix things for newbies—people who are intimidated, who don't think of themselves as superstars—you fix it not only for women, but for men that have that same situation. Right now, we have to say "this is for women/girls" but the lines are getting increasingly blurred, and maybe we won't have to worry someday, but for now, we have to bring the good stuff up to parity.

I'm quite interested in how we can prepare the next generation for the world, with its concerns about security and privacy. I like reading books like Little Brother by Cory Doctorow, which is a YA book that forces kids to think about these issues. I want to find a way to introduce the next generation to these issues in a way that is relevant. If anyone has ideas on how to do that, I'd like to know.


bitHound puts out features, not fires

The following is a partial transcript from a phone interview with Dan Silivestru, CEO and co-founder of bitHound—automated, open source, code quality analysis software.

Where are you from originally?

I was born in Romania, lived there for six years, don't remember any of it. Then my parents went to Israel for seven years, then moved to Montreal, Quebec, and lived there for another seven years. Then I moved to Ontario, and I'm still here.

When did you start bitHound?

We started in November 2013, and I went full-time in January 2014.

How many folks are at bitHound now?

There are nine of us. A CTO, COO, development team of four, plus staff to handle operations and HR.

What is bitHound, and what do you do?

We are centered around the concept that writing code is easy, but building resilient, remarkable software is difficult. There is much that can be told as you go along. We analyze projects from conception to today, pointing out hotspots that require attention, and suggestions on how to fix them. We track code as you move forward, so we can say if things are getting better or worse.

Something I'm proud of is a feature that showcases the dependencies that your projects have from npm and Bower, for example. It helps you understand the code that you bring into your project, and then rank it from a quality perspective. The dashboard shows you up-to-date or out-of-date status, as well as assigns you a bitHound score that is derived from Code Quality, Maintainability, and Stability. You can then pick better dependencies based on quality level. You can really dive in with bitHound.

Does bitHound support other programming languages besides JavaScript?

It is JavaScript only for now, but in the future there could be more. Rather than just the bare minimum, we think that to provide value, we've got to do a deep dive into a language. We run almost a dozen different "critics" or analyzing engines, to get "actionable insight."

(Remy: Completely understandable. Source code analysis is not what I would call a "trivial" problem...)

It is not an easy problem; it takes a lot of time and effort. We've been at it for about a year, and it is still in closed beta.

What is it like for a bitHound user?

We strive to make the user interaction with the product very simple. We think that if your software needs a manual, you're probably doing some things wrong.

The experience is simple: use OAuth for GitHub. You enable bitHound on a per-repository basis. We run our analysis, and it takes 2-20 seconds, and then we fill-in the timeline going backwards.

The idea was, on the first dashboard, you would get an "eagle-eye" view: the top five priority files, and you can expand the list further. We've had many users who are new to the concept of quality concerns such as linters, duplicate functionality, etc. So, rather than presenting an overwhelming amount of information, we present the top five most worrisome files and annotate the code with issues, so you can filter and address them. You can see on your dashboard, which dependencies are out-of-date, and we have some upcoming security analysis features in the works too.

(Remy: This sounds like it would be useful for researchers. Students in my HFOSS course at RIT have to do repository analysis as part of our "Commarch" assignment each semester.)

We have some students who use our products, and are getting introductions to professors. It seems only recently that source control is even being taught at the college level. When dealing with JavaScript, which doesn't really benefit from compiling, linters are a life-saver. The students really appreciate it. We consider these very simple things.

A big part of what we want to do behind bitHound is answer: "How can we get people to build quality code?" You have to treat your job as a craft. It is craftsmanship, and proper tooling around making software.

When did software craftsmanship become a passion for you?

It is one of those things where you get burned. You get burned in production once, twice, and then again. Then you say: "How did I get here?"

I'm self-taught when it comes to software development process. Much of what I learned was learned "on the fly." Having gone through institutions, some left me better, some worse. The first five years of my career were focused on delivering features on time. Then I got introduced to this concept of: "If you are going to cut corners, you need to document it." When we started doing tests, though the upfront work was higher, six months later, we saw big benefits. Even as systems got more complex, you have safeguards in place. You can go back and fix it. We were able to keep our bug count down.

We were putting out features, rather than putting out fires.

Per feature costs, we're much lower. In the long-run, it allows your organization to move forward at a steadier, and faster, pace. Then again, at other places I've joined, they were on their second or third full rewrite. It didn't happen overnight. I didn't just wake up and say: "Test test test." It wasn't until after getting burned...

How did you get into software development?

At the University of Waterloo, honors science, then honors physics. Then I took time off to make money. While I was working at a company, I had a friend working in IT, while I was working on phones. I asked him: "IT? What do you do?" He showed me AS/400 systems and greenscreens. I asked: "How do I do that?" And the next thing I knew, I was sitting in front of the VP asking to do it.

I got an AS/400 manual, and the opportunity and big break to do that. I did that on my own time for a few weeks, and after a few months there, I said: "This is the career I want" and never looked back. I had some tremendous mentors along the way. I was there for a few years, then went into "e-business" doing consulting.

What makes a good mentor? Where and how do you find them?

The number one trait for mentors I've had, to this day, was selflessness. They were doing it for the pure joy of helping someone else develop their craft. They are not about: "I'm teaching to get something out of you for free." Obviously, they have to be knowledgeable, but you can tell more about them as a mentor by how they carry themselves.

If the first five years of my career were about "How do I code?", the years after were about defining components that interact together. Later on, after my first consulting position, I had a new mentor, with new questions.

Dan: "We should write tests... Why?"

Mentor: "How do you know what the interface between components really looks like?"

Mentors have to be good at their craft, but you can tell a lot by the questions they ask. Listen to how they go about development and the way they ask questions.

What does your day-to-day look like?

I wrote quite a bit of code when we started, but I'm sure much of it has been rewritten. Since we announced funding in late November, it has been more about investor follow-up for me. I've matured within the code environment and in the running-a-company environment. Mostly I'm steering the ship. Day-to-day, lots of emails, some interviews like this one, working with team to set priority/strategy, and yes, I still write some code. I'm not anywhere near the critical path anymore though :P

What are your feelings on free and open source software / FOSS?

I co-founded a company, tinyHippos, in 2009—which was acquired by Blackberry in 2011. One of the visions was open sourcing the Ripple Emulator. That happened, and it was fantastic. There were only three of us that moved over, in terms of "how you run a project in FOSS."

I'm proud that they took this project, and donated it to the Apache Foundation. It sits side-by-side with PhoneGap. There was great experience in "how to foster a community" and "how do you be a BDFL?" (benevolent dictator for life). Someone who moves a project forward in a community.

We've loved FOSS throughout our careers, and use it constantly at bitHound. Our analysis depends on many popular frameworks: JSHint for linting, esprima, async for callback structure, and ZeroMQ as a distributed parallel computing platform. You can check out our talk at JSConf last year about distributed complex computing.

Any others?

Yes! We make use of over 80 open source projects throughout our solution but a few that come to mind are d3, jquery and Polymer in production.

Where is bitHound contributing back?

Right before starting this company, Gord Tanner was the core contributor to the Apache Cordova Project; he created the Ripple Emulator, which he donated to the Apache Foundation and which is used today by Microsoft, Intel, Adobe, and over 250K developers. He unified the platform in a coherent manner, and still contributes there. For bitHound, he is a co-founder and CTO leading the technical development of services.

bitHound has a simple philosophy. While we're in heavy product build, the focus is the product, but we have come across projects in our stack that have issues. We always contribute any fixes or additions back into those projects. That is our standard operating procedure. If we need to make a change specific to our use that has no benefit to the community at large, we don't push it, but if we fix a bug or add a feature, we always contribute it back upstream. That is a recommendation to any company out there: if you are going to get something for free, from someone's hard work, and you enhance it, you should contribute it back so all can benefit. Otherwise, the community would die if everyone consumed and no one contributed.

Internally, we have components that we think will be beneficial. There will be prominent links on our site to our GitHub.

One component is what we call "The Farm" where we have workers assigned to do work in parallel. A simple event bus really, with aggregate results coming back. We're all often dealing with the single-threaded nature of the JavaScript language, and you'll see us trying to open source that.

One thing that has happened internally with The Farm is that we've already abstracted it into a separate project, to be released. This is one of the things I've realized in my career—all of us have—just putting something on GitHub and calling it "open source" is not enough for it to take off. It must be prepared and ready for the community. That means a proper README, proper docs, proper instructions for looking at the project and contributing, beyond downloading and installing. Anyone can npm install, but it is a different thing to make it so that contributors can understand how to augment it. We will be taking our time putting our code out there, because we wanna do it right.

Any final message or parting thoughts?

Don't just write code, consider what you are doing as a craft. It takes time, and practice, and it takes time to build something that is resilient and beautiful. Open source is a great way to perfect that craft. One reason we built the dependency tool into our product was to get more people diving into code they can contribute to. Seeing other people's architectures, will expose you to better approaches.

This is sort of what spawned bitHound.

Software development is a craft, and you should be proud of it. Take the time, learn the craft, and strive to build masterpieces—the ones that gather great attention in the community. We're in this to do more than just make money, and bitHound will be free forever for open source, with no restricted features. We're huge believers in this movement. We participate, and we want to help.

This work by Remy DeCausemaker is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Special thanks to @itssamlowe for her contributions and edits.


The elements to a better future for software

In this interview, I take a deep dive into the life and motivations of Kyle Simpson, an open web evangelist and the author of the JavaScript book series You Don't Know JS. Find him on GitHub and see his many projects and posts on

Where are you from?

Oklahoma City, born and raised. Started school in Oklahoma, but now based in Austin, Texas—since mid-way through college. I live there with my wife and two kids. I moved to Austin because there wasn't much of a tech community in Oklahoma back in the 90s, and Austin was the nearest big tech hub. Now, I go back to Oklahoma to visit and see they have a fantastic community there, and I'm jealous! It's great to see!

Where did you go to school?

I started at University of Oklahoma, then transferred to and graduated from Texas State University with a B.S. in the engineering track of Computer Science.

What is your day-to-day like?

I have two different kinds of days; days where I speak/teach, and days where I do FOSS development. On teaching days, I'm connecting with community, and teaching JS to make a living—mostly in a corporate workshop environment, or public workshops associated with conferences. Those days I stand all day, and teach, and lecture, and walk people through exercises.

When not on the road doing that, I'm participating in the FOSS community—writing code, or writing blogs, or books. I spend lots of time doing that, constantly on GitHub with commits and pull-requests flying everywhere. I currently have a 300+ day streak going on GitHub—not to show off—but to inspire others to do more, and more regularly, with FOSS contributions. If my streak can encourage one person to do one extra contribution, that's what it's all about.

The best way to describe it: 50% of my time I spend teaching to pay bills, and 50% donating time to the FOSS community, to build awareness around the web platform and its technology with the theory of "all boats rise with the tide." The more people who learn and appreciate web tech, the more people will hire me to teach it to them. I'm an avid learner of things, and the best way to learn is to teach others. I think "how can I make this make sense to others?" As soon as I learn something, I write code to explain it, find a book or post to describe it in, and if I find something I didn't understand, branch off, and learn more, then start the cycle again. It just gets deeper and deeper and deeper.

I said a while back that, "I think it is important for developers, especially those breaking into the industry, to find ONE thing you love to learn, and master it." It may not be the one thing you write all the rest of your code with, but it is the process of sticking with something to mastery that is valuable. Don't just jump from thing to thing to thing. While you may get a good paycheck doing that, there is something missing from the art of deeply understanding something. Once you've accomplished that, and you know what there is to know, then branching out to try things is great! Be looking while you branch for that next thing you want to master, rinse, and repeat. Constant jumping around as a "jack-of-all-trades-master-of-none" was more relevant 5-10 years ago. What is missing now is people who really know what they are doing.

Our industry currently rewards "flexibility" and working at the whim of someone else. "Yesterday, we wrote everything in Angular, and today, we're going to rewrite everything in React..." After enough of those inflections, you "become" a senior developer, but you miss out on appreciating a technology in the way it really deserves with deep understanding.

Mastery? How?

Well, specific answers are variable. Angular will be much different than Node. In general, the important skill is the curiosity and desire to learn. Don't just read a line of code and say, "I guess that is just how it works..." Keep reading, and keep following the rabbit hole down until you can say you understand every part of that line of code. I tell my workshop attendees that I don't expect you'll write your own framework, but that you could. Don't treat frameworks as black-boxes—you need to understand them intimately. If you choose something, know how it works but also WHY. Knowing when to change comes from understanding why—not because there is a great book, or how many "stars" the repo has. Those are poor signals. Beyond understanding of the open source community, your own understanding is the strongest signal.

You don't have to reinvent the wheel, but you should understand how the wheel rolls before you decide to bolt it onto the car you're building.

How did you get started in FOSS?

I was working for a company, not as a developer, but as a "User Experience (UX) Architect." I worked on the project management team prototyping User Interfaces (UIs), and handing them off to the dev team. Inevitably, everything I wrote was just put into production, or adapted slightly. I was working on a project in 2008 that needed to make cross-domain Ajax requests, and back then it was a real pain. I needed a solution to prove out my concept for the app, and I said, "I know some Flash, and I know that it can do that." So I built a JS API wrapper around an invisible Flash file, with the same API as the XMLHttpRequest (Ajax) object, and I called the project flXHR (Flash-based XHR).

Once I got it working, I thought, "Maybe other people will find it useful?" so, I released my code as open source. Back then, open source was pre-GitHub, so the source was all on my website, and I pointed people at it from blog posts, etc. I also put code on Google Code too, but there wasn't as much of a community back then either. In early 2009, I wanted to get into the conference scene. 2009 was the first big JavaScript-specific conference, JSConf, and so I decided to go and speak about SWFObject (one of the most downloaded projects on the web at the time), which I was using heavily in flXHR. I was a core dev for SWFObject and gave a "B track" talk at the conference. Only like three people showed up to my first talk, but I fell in love with the idea that I could speak to call attention to open source code and inspire others to help make it better!

The fullness of my open source perspective came later that year, in November of 2009. I released the project I'm probably most known for: LABjs (a performance-optimized dynamic script loader). I gave a talk at JSConfEU in Berlin, Germany about script loading. Two hours before going on stage, I was overhearing lots of people talking about this new site called GitHub, so I went and signed up while I was sitting in the audience. I pushed all my LABjs code there, and that was my first official: "I am in the FOSS community" moment.

One thing you wish undergrads would be exposed to before they leave school?

Unquestionably, "Simple Made Easy," a conference talk by Rich Hickey, who works at Cognitect on the Datomic database as well as the Clojure language. He's a completely brilliant dude. The talk is so important to me that I don't just have it in a bookmark, but on my toolbar, and I reference it practically daily. The premise is, there are two terms that people conflate: "simple" and "easy." He actually compares "complex" versus "hard." The root word for complex comes from "complected," as in strands of rope being braided together. Highly braided code is complex, and harder to maintain and refactor. Software developers, when building, focus on building "easy" software—software that is easy to install and use, and does a lot for you. That pursuit often results in complex software.

If developers go after modular simple (non-complex), non-braided software, they can often end up with easy software too. If you go after easy, you usually end up with complex, but if you go after simple you can also achieve easy.

Node.js is my example. I was trying to install it on a Virtual Machine, and had met the operating system (OS) requirements, but couldn't install it because the OS didn't have the proper version of Python. Node, a JavaScript framework, uses Python for its installer? Why do I need Python to install Node??? The answer was because writing a cross-platform installer in Python was easier... but when you add additional braiding, you can also make it more complex to implement and maintain.

Nearly every framework on the planet claims to be modular, but most are not. Modular, to me, means that a piece could be removed, and the framework would still be able to be used. "Separate files" does not a modular framework make, if all those pieces are required for the framework to work! My goal, my desire, is that developers go after simple modular design, and that be the most important ethic. What comes from that then, is proper design that can be made easy for people to use. We need to stop worrying as much about creating pretty-looking, "easy" interfaces, and instead worry a lot more about making simple software.

What is your toolchain?

Sublime is my text editor. In principle, I love browser-based editing, but I run the nightly versions of my browsers to find bugs early, while I still have a chance to get them fixed. I can't handle browser crashes and uncertainty when I'm writing code.

Sublime has so many plugins you can use for whatever you want. Though I don't use many, other people like the "intellisense" plugins, and many other plugins that are part of a great ecosystem they've built.

My other main tools are the browser developers tools in Firefox and Chrome.

My other mission critical tool is the Git command-line tool. GitHub is my graphical git client, because it effectively augments my usage of the git CLI.

Git. How do you use it?

I don't have a lot of fancy process; it depends on whether I'm writing a book or writing code. For books, when I make a change, I want to write a coherent section, and make one commit per section. In the writing of one section, I may add to the Table of Contents, or clarify another section, or add another. Whenever I have a logical series of changes, I git add each individual file (files written in Markdown, BTW), and git commit -m. In the commit message, I list which book the commit pertains to, which chapter(s), and a quick description of what it was about. The commit history of the book series really tells a story in and of itself, of how over months I figured out how to write them, section by section, reorganization by reorganization!

I typically use git commit -m ".." && git push, so that I push right after committing.

It is not often I do batch committing, usually only when I've been on an airplane without wifi for awhile, in which case I'll push 5-8 commits at a time once I get back online. Usually, I try to push right after I finish the section I'm working on.

For code, I have two different strategies. If it is a "big" feature, I create a feature branch, and I put several commits into the feature branch. The goal isn't to finish the feature and do a massive merge, but to regularly merge. I like to develop in stable batches, merge regularly, and do no harm to master. If I do make a bugfix on master while developing a feature, I rebase the feature branch to get that fix in. I don't necessarily do short-lived branches, but I do short-lived differences. :)

Many devs do squash merges, and want to appear to have "Dreamed up this perfect feature and written it perfectly all at once." I don't want that. I want to preserve the history. In rare cases with a pull-request that has lots of individual commits that are all logically connected, I'll do a squash-merge.

In cases where I have a simple bug fix to make, I'll generally just add and commit directly to master. Regardless, every time I'm doing the final commit, I'm committing both the docs and tests. I firmly believe that it isn't DONE until it has docs and tests. I don't really do Test-Driven Development (TDD), but test-oriented or test-informed development. I have a set of tests, and sometimes they are written ahead, but the typical plan is "I don't know how it should behave" when I fix something with a new feature—it will take me working through the implementation to know. I develop the tests along with the code—code and test—rather than writing code after tests or the other way around.

I'm much more formal when working on other peoples' projects, or as a bigger team. I try to stay away from scenarios where I need the complicated cherry-picking or interactive rebasing features of Git. I've done those things only a few times in my career. I use GitHub for most of those things, and it handles those cases pretty well. A pull-request with 2-3 commits, whatever their process was, is something that is useful to preserve in the history, so I'll usually just merge it as-is.

What are you currently working on?

Other than my books, I have three main areas of project interest I cycle through on any given week.

Number 1 that gets most of my interest is asynchronous ("async") programming patterns (promises and generators, that sort of thing). I have a library called asynquence, a promises-style asynchronous library. It can also handle generators, reactive sequences, and even CSP (see Hoare's seminal book "Communicating Sequential Processes"), with these higher-level patterns layered on top of the basic "sequence" abstraction. Most other libraries have just one flavor of async programming, but I've built one that can handle all the major patterns. I think async is one of the most important things that JS devs need to get up to speed on. I've got several conference talks and projects about that topic.

We're recognizing more and more that sophisticated programs need more well-planned and capable async functionality. Callbacks alone don't really cut it anymore.

[Remy DeCausemaker: Yes, I reckon this jibes well with Python incorporating Tulip and features from Twisted into the core library, starting with Python 3.4.]

Number 2 is in the same vein as the "compile to JS" languages. Experimentation is important for the language. Taking that to its extreme, I have a set of tools to define custom JS syntax and transpile it to standard JS: basically, standard JS plus custom syntax. I'm working on tools that do "little" transformations on your code. The bigger picture is "invertible transforms": non-lossy transformations that can be applied in both directions. If you can define them two-way, you can have one "view" of the code for your own editor and another "view" for the team repository. You check code in and out, and you can work on code the way your brain works, while the team works the way theirs does.

When you use CoffeeScript, for example, it is a lossy transformation, and an "all or nothing" decision: everyone works on the code that way, or not at all. The simple version of what my tools can do is stylistic things like spaces versus tabs. The tools can change that code style for you instead of just complaining with errors.
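
As a hand-rolled illustration of such a non-lossy, two-way transform (not Kyle's actual tooling), consider leading-tab versus four-space indentation. Each function exactly inverts the other, so a file can round-trip between the two "views" without losing information, assuming the space view only ever uses whole four-space units.

```javascript
// View A -> View B: expand each leading tab into four spaces.
function tabsToSpaces(src) {
  return src.replace(/^\t+/gm, (tabs) => "    ".repeat(tabs.length));
}

// View B -> View A: collapse leading four-space units back into tabs.
function spacesToTabs(src) {
  return src.replace(/^(?: {4})+/gm, (sp) => "\t".repeat(sp.length / 4));
}

const tabbed = "function f() {\n\treturn 1;\n}";
const spaced = tabsToSpaces(tabbed);
console.log(spacesToTabs(spaced) === tabbed); // true: round-trips cleanly
```

Because the pair is invertible, one developer can read and edit the spaced view while the repository (or a teammate) keeps the tabbed view, with no information lost either way.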

ESRE is one such tool I'm building for two-way code-style transformation.

let-er is another tool that transpiles a non-standard version of JS block-scoping into standard JS block-scoping. I have a series of in-progress prototypes of these various tools, and eventually I can go back and write the overall "meta" tool that drives them with the two-way transformations.
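
As a rough illustration of the idea (hedged: the exact syntax let-er accepts may differ from this sketch), the non-standard "let block" form on the input side becomes an ordinary standard block with `let` declarations on the output side:

```javascript
// Input (non-standard "let block" syntax; will not parse as standard JS):
//
//   let (x = 2, y = 3) {
//     console.log(x + y);
//   }
//
// Output (standard JS block scoping): an explicit block with `let`.
{
  let x = 2, y = 3;
  console.log(x + y); // 5
}

// The bindings do not leak outside the block:
console.log(typeof x); // "undefined"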

Number 3 is a crossover between JS and CSS: a project in the templating world. There are two extremes in templating: zero-logic templating and full-programming-language templating. Zero-logic templating includes projects like Mustache. We don't want business logic in the views, so we use no logic at all. But in practice, this creates very brittle controller code that's closely tied to the structure of the UI, and that brittle connection is precisely what we wanted to avoid by keeping the concerns separate.
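
That brittleness is easy to see in a tiny hand-rolled sketch (not any particular library): because the zero-logic template can only substitute values, even a purely presentational decision like striping rows leaks back into the controller.

```javascript
// Zero-logic "template": it can only substitute values, no conditionals.
const row = (item) => `<li class="${item.cssClass}">${item.label}</li>`;

// Controller: because the template can't express "stripe every other
// row," that UI-structure decision ends up here, coupling the
// controller to the markup it was supposed to stay ignorant of.
const data = ["alpha", "beta", "gamma"];
const viewModel = data.map((label, i) => ({
  label,
  cssClass: i % 2 === 0 ? "even" : "odd", // presentation detail in the controller
}));

console.log(viewModel.map(row).join("\n"));
```

Change the markup's striping scheme, and it's the controller, not the view, that has to be edited: exactly the coupling the separation was meant to prevent.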

The other extreme is having a full programming language in your templating. My metaphor: if I hand you a pile of rope, you can build a rope bridge, which is helpful, or a noose, which isn't quite so helpful. If you are in a "15-minute must-do feature" crunch, you'll just drop in if-statements and function calls, put in a TODO comment to fix it later, and then rarely do. That's how we unintentionally leak business logic into our views.

Neither extreme is good enough. We need something in the middle that has enough logic for structural UI construction, but keeps out all the mechanisms you can abuse to do business logic.

For 4-5 years, I've experimented with a templating engine called grips that aims for that happy medium. It has enough structural logic, but is restrained so that you can't do things like function calls, math, etc. It's mature enough that I use it in my projects and have rolled out production websites with it. It is definitely a work in progress, but it is "stable enough" to be used. People still like to bikeshed about the syntax, for sure, and may not like the choices I made. But I think I at least asked the right questions, like: what does a templating engine need or not need? I started with nothing and only added features when they were necessary for structural stuff. You have basic looping and conditionals, but in limited fashion. I summarize that balance as: if you find yourself unable to do something, it should be a signal that you don't need it in your templating engine.

Two years ago, I started watching the rise of LESS, SASS, and other tools like Compass. What struck me was how limited they were in solving the problems I thought were important in the CSS world. Those tools require the CSS to be recompiled every time you make a change. "Compile an HTML template once, re-render with external data" is a solved problem, but for some bizarre reason that insight never carried over to CSS.

So, I invented grips-css, a CSS templating syntax similar to LESS, built on top of the core grips templating engine. Most importantly, with grips-css the data is external (i.e., CSS variables). All the data operations that projects like SASS handle by inventing declarative syntax inside of CSS, you can and should instead do outside of CSS, producing new data and then just re-rendering the template.

If I wanted to change "blue" to "red," I don't need to recompile all my CSS, I can take my pre-compiled CSS, and just re-render it with the different variable data.
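
A minimal sketch of that workflow, using a hand-rolled template function rather than the real grips-css API (the `{{name}}` placeholder syntax here is purely illustrative): compile once, then re-render cheaply with different variable data.

```javascript
// One-time "compilation": turn the template text into a render function.
function compileCssTemplate(source) {
  return (data) => source.replace(/\{\{(\w+)\}\}/g, (_, name) => data[name]);
}

const render = compileCssTemplate(
  ".btn { color: {{fg}}; background: {{bg}}; }"
);

// Re-rendering with different external data requires no recompilation.
console.log(render({ fg: "white", bg: "blue" }));
// .btn { color: white; background: blue; }

console.log(render({ fg: "white", bg: "red" }));
// .btn { color: white; background: red; }
```

Changing "blue" to "red" is just a data change against the already-compiled template, which is exactly the property preprocessors that recompile source files on every change give up.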

The compiled CSS template is basic JS, which means you have the option of re-rendering CSS dynamically in the browser, on the fly, for example responding to changing conditions. It's much cleaner to simply re-render a snippet of CSS and inject it into the page than to use brittle JS code to change CSS style properties. Of course, you can also run grips-css on the server, much like you do with current preprocessors. The point is that you have both options with grips-css, instead of being limited to server-only, inefficient total recompilation. What I'm trying to suggest is that the spirit of what SASS and the others are going for is good, but the way they are going about it is limited and not terribly healthy for the future of CSS.

CSS templating is, I think, a much cleaner and more robust way to push CSS tooling capability forward.

You mentioned important problems to solve in CSS? What are they as far as you are concerned?

LESS solved three main things. First, variable data that can be changed and reused. Second, structural things like mixins, to achieve DRY coding. Third, extends, which is a light version of polymorphism for overriding pieces of templates. We needed to solve those things, and they did. But as I said, we solved these problems in text templating years ago, and we should apply those same principles from HTML/text templating to the world of CSS. There's no reason CSS needs to invent its own solutions for these problems.

So, what is next?

Putting on the "prognosticator hat," what do I think we'll see in the next 3-5 years?

Applications are going to become "UI optional." The new Apple Watch has a pretty limited display, and some apps won't show anything at all. With things like Google Glass or Oculus, you'll have apps that don't have any visual representation at all. This is what I call the coming "APIs-as-Apps" era. Your "app" might be nothing more than a piece of code that can send and receive data: a distributed API. Some companies build apps and care greatly about branding. Twitter wanted you to experience their app the way they wanted. Facebook wanted you to experience the Facebook app the way they wanted. But there is a reality that people will experience apps without your UI at all. Companies must give up control of the presentation, as our devices and our interactions with them diversify from purely visual to audible or tactile interactions.

My watch may read things to me without UI, and that is nothing more than a data operation. Facebook should provide the text for my watch to read to me. The UI doesn't necessarily go away, but it becomes an optional add-on to apps. In the longer term, I'd like to stress the decoupling more. We see people building single-page, complex, front-end driven apps. Most of the app is in the front-end. Gmail is cool to use, sure, but I don't think they are very flexible in that new optional-UI trend. It will be hard to separate Gmail the App from Gmail the UI.

Developers are making assumptions about access to unlimited, fast bandwidth with every retina image served up... We're not designing things in layers the way we know we should be. For all the people on slow connections it's just, "Meh, they'll get better access eventually." We need to give users tools in the browser to choose what is important to them. I should be able to say, "No, I don't want a huge single page Gmail app, I need a simple post-in-page mobile version." This is much more than just expecting a "mobile site." We need layered sites.

We need to take a serious look at how much we assume that UIs and data bandwidth usage are an unlimited resource. This could be like "responsive 2.0"—responsive not just to screen layout, but to network conditions too. The app should figure out that I am roaming and not shove everything at me it possibly can. UI needs to be decoupled, simplified, layered, and more focused on portable apps.

I heard a conference talk years ago from PPK (Peter-Paul Koch). He suggested, "Why is it I can't send a text to share an app with you? Why do you have to buy it from an app store?" He proposed that monetization would shift from the app to the data. He believes apps should be self-contained, portable pieces of code that can be freely shared around, regardless of device. JS is great for this because it is ubiquitous. For instance, if Facebook wanted to charge me for data, because there was no UI on my device within which to serve ads to me, I should be able to decide if I want to pay them for the data of my updates.

I hope that kind of thing represents the future of the web and the usage and consumption of apps.

Lead Image: Open source code for a better food system, code with grass image

Open source all over the world

After a full day at the annual meeting of the Community Moderators, it was time for the last item on the agenda, which simply said "Special Guest: TBD." Jason Hibbets, project lead and community manager for Opensource.com, stood up and began explaining, "In case it wasn't going to happen, I didn't want to say who it was. Months ago I asked for any dates he'd be in town. I got two, and picked one. This was one day out of three weeks that Jim was in town."

The moderators, in town from all over the world for the All Things Open conference, stirred at the table. Their chairs squeaked and snuck a few inches edgewise.

"We're going to get a half hour to hear from him and take a couple questions," said Jason.

The door opened, and as if it had been waiting for him the whole time, the only vacant seat at the head of the table was soon occupied by a tall fellow.

"How is everyone doing?" said the man. No suit, just a button down shirt and slacks.

The next tallest man in the room, Jeff Mackanic, senior director of Global Awareness at Red Hat, explained that the majority of the Community Moderator team was present today. He asked everyone to quickly introduce themselves.

"Jen Wike Huger. Content Manager for Happy to have everyone here."

"Nicole. Vice president of education at ByWater Solutions. We do FOSS for libraries. I travel and teach people how to use software."

"Robin. I've been participating in the Moderator program since 2013. I do lots of stuff for OSDC and work in the City of the Hague, maintaining their website."

"Marcus Hanwell. Originally from England, I'm now at Kitware. I'm the technology lead on FOSS science software. I work with national labs and use things like Titan Z doing GPU programming. I've worked with Gentoo and KDE. Most of all, I'm passionate about joining FOSS and open science."

"Phil Shapiro. I administrate 28 Linux work stations at a small library in D.C. I consider these folks my coworkers and colleagues. And it's wonderful to know that we can all feed into the energy and share ideas. My main interests are how FOSS intersects with dignity, and enhancing dignity."

"Joshua Holm. I spend most of my time staring at system updates and helping people search for jobs on the Internet."

"Mel Chernoff: I work here at Red Hat, primarily on the government channel with Jason Hibbets and Mark Bohannon."

"Scott Nesbitt: I write for many things, but have been using FOSS for long time. I'm a 'mere mortal' just trying to be more productive, not a sysadmin or programmer. I help people meld FOSS into their business and personal lives."

"Luis Ibanez: I just joined Google, but I'm interested in DIY and FOSS."

"Remy DeCausemaker: Resident Hackademic at the RIT MAGIC Center and Adjunct Professor for the Department of Interactive Games and Media. Been writing for for about four years now."

"You teach courses for the new FOSS Minor then," said Jim. "Very cool."

"Jason Baker. I'm a Red Hat cloud expert, mostly doing work around OpenStack."

"Mark Bohannan. I'm with Red Hat Global Public Policy, and I work out of Washington. Like Mel, I spend a good deal of time writing for, or finding folks from, the legal and government channels. I've found an excellent outlet to discuss positive things happening in government."

"Jason Hibbets. I organize the organized chaos here."

The room had a good chuckle.

"I organize this chaos too, you could say," says the brownish-red haired fellow with a gleaming white smile. The laughs grow then quieten. Breaths become baited.

I sat to his left and had a moment to glance up from transcribing. I noticed the hint of a smile behind the knowing eyes of a man who has led the company since January 2008: Jim Whitehurst, president and CEO of Red Hat.

"I have one of the greatest jobs on Earth," began Whitehurst, as he leaned back, crossed his legs, and put his arms behind his head. "I get to lead Red Hat, travel around the world and see what goes on. In my seven years here, the amazing thing about FOSS, and, broadly open innovation, is that it has left the fringe. And now, I would argue, IT is in the same place that FOSS was in its early days. We are seeing FOSS going from an alternative to driving innovation. Our customers are seeing it, too. They're using FOSS not because it is cheaper, but because it provides them with control and innovative solutions. It's a global phenomenon, too. For instance, I was just in India, and discovered that, for them, there were two reasons for embracing of open source: one, access to innovation, and two, the market is somewhat different and wanting full control.”

"The Bombay Stock Exchange wants to own all the source and control it. That is not something you would have heard five years ago in a stock exchange, anywhere. Back then, the early knock on FOSS was that it was creating free copies of things that already existed.' If you look today, virtually everything in big data is happening in FOSS. Almost any new framework, language, and methodology, including mobile (though excluding devices), are all happening first in open source.”

"This is because users have reached size and scale. It's not just Red Hat—it's Google, Amazon, Facebook, and others, they want to solve their own problems, and do it the open source way. And forget licensing—open source is much more than that. We've built a vehicle, and a set of norms. Things like Hadoop, Cassandra, and other tools. Fact is, open source drives innovation. For example, Hadoop was in production before any vendor realized there was a problem of that scale that needed to be solved. They actually have the wherewithal to solve their own problems, and the social tech and principles to do that. "Open source is now the default technology for many categories. This is especially true as the world moves more and more to content importance, such as 3D printing and other physical products that take information content and apply it.”

"We have this cool thing in one area, source code, but it is limited. But there are still many opportunities in different industries. We must ask ourselves, 'What can open source do for education, government, and legal? What are the parallels? And what can other areas learn with us?'"

"There's also the matter of content. Content is now free, and we can invest in more free content, sure. But we need free content that has a business model built around it. That is something that more people should care about. If you believe open innovation is better, then we need more models."

"Education worries me with its fixation on 'content' rather than 'communities.' For example, everywhere I go, I hear university presidents say, 'Wait, education is going to be free?!' The fact that FOSS is free for downstream is great, but the upstream is really powerful. Distributing free courses is great, but we need communities to iterate and make it better. That is something that a lot of different people are doing, and is a place to share what is going on in this space. The question is not so much 'How do we take content?' as it is 'How do you build and distribute it? How do you make sure it is a living thing that gets better, and can morph for different areas?'"

"But the potential to change the world is limitless, and it's amazing how much progress we've already made. Six years ago we were obsessed about defining a mission statement. We started by saying, 'We are the leader,' but that was the wrong word, because it implied control. Active participant didn't quite get it either... Máirín Duffy came up with the word catalyst. And so, we became Red Hat, the company that creates environments to agitate action and catalyze direction.”

" is a catalyst in other areas, and that is what is about. I hope you see yourselves this way, too. The quality of content then, when we started, versus now, is incredible. You can see it getting better every quarter. Thank you for investing your time. Thank you for being catalysts. This is a chance for us all to make the world a better place. And I'd love to hear from you."

I stole a glimpse of everyone at the table: more than a few people had tears in their eyes.

Then Whitehurst revisited the topic of open education. "Taking it to an extreme, let's say you have a course about the book Ulysses. Here, you can explore how to crowdsource a model and get people to work together within the course. Well, it's the same with a piece of code: people work together, and the code itself gets better over time."

At this point, I got to have my say. Words like fundamental and possibly irreconcilable came up when discussing the differences between FOSS and academic communities.

Remy: "Retraction is career death." Releasing data or code with your paper could be devastating if you make a mistake. School has always been about avoiding failure and divining 'right answers'. Copying is cheating. Wheels are recreated from scratch ritualistically. In FOSS, you work to fail fastest, but in academia, you invite invalidation."

Nicole: "There are a lot of egos in academia. You need a release manager."

Marcus: "To collaborate, you have to show the bits you don't understand, and that happens behind closed doors. The reward model is all about what you can take credit for. We need to change the reward model. Publish as much as you can. We release eventually, but we want to release early."

Luis: "Make teamwork and sharing a priority. And Red Hat can say that to them more."

Jim: "Is there an active role that companies can play in that?"

Phil Shapiro: "I'm interested in tipping points in FOSS. It drives me nuts that the Fed hasn't switched to LibreOffice. We're not spending tax dollars on software, and certainly shouldn't be spending on word processing or Microsoft Office."

Jim: "We have advocated for that. A lot. Can we do more? That's a valid question. Primarily, we've made progress in the places we have products. We have a solid franchise in government. We are larger per IT spend there than the private sector. Banks and telcos are further along than the government. We've done better in Europe, and I think they have less lobbying dollars at work there, than here. This next generation of computing is almost like a 'do-over'. We are making great progress elsewhere, but it is concerning."

Suddenly, the door to the room opened. Jim turned and nodded towards his executive assistant standing in the doorway; it was time for his next meeting. He uncrossed his legs, leaned forward, and stood. He thanked everyone again for their work and dedication, smiled, and was out the door... leaving us all a bit more inspired.

You don't know JavaScript, but you should

This is a partial transcript of a meeting with Kyle Simpson, an Open Web Evangelist from Austin, TX, who's passionate about all things JavaScript. He's an author, workshop trainer, tech speaker, and OSS contributor/leader.

Thank you all for having me. I'm Kyle Simpson, known as "getify" online on Twitter, GitHub, and all the other places that matter. I was here in Rochester teaching a workshop for the Thought @ Work conference this past weekend, and figured I'd stick around to check out some JavaScript (JS) and Node classes here in the New Media Interactive Development program, so thank you for having me.

I have been writing a book series on JavaScript called You Don't Know JS. The entire series is being written in the open, up online on GitHub for free reading. They're also being professionally edited and published through O'Reilly. There are five titles planned for the series: two have already been published, the third is complete and in final editing, the fourth is almost complete, and the fifth one will commence soon.

  1. Scope & Closures: Covers closure primarily, which is one of the most important foundational topics. All JS programs use closures, but most developers don't know that they're using it, or what to call it, or just how it works.
  2. this & Object Prototypes: Covers the mystery of how the this keyword works, and then tackles the misconception that JS has classes—not true! Instead, JavaScript has prototype delegation, and we should embrace that rather than trying to fake class orientation.
  3. Types & Grammar: Goes deep into coercion, the mechanism most people think is evil in JS. I encourage you to dig into it and learn it, because coercion not only isn't as bad or weird as you've been told, but it can actually help improve your code if you learn how to use it properly!
  4. Async & Performance (in progress): Explains why callbacks for async programming are insufficient, then goes deep into promises and generators as much better async patterns. Also covers optimizing and benchmarking JS performance.
  5. ES6 & Beyond (planned): Covers all the changes to JS coming in ES6, as well as looking forward to the beyond-ES6 evolution on the horizon.
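
As a quick taste of the kind of material in books 1 and 3, here's a short sketch (my own illustrative example, not taken from the books): a closure of the sort every JS developer uses, knowingly or not, and two coercion results that follow learnable rules rather than magic.

```javascript
// Closure: the returned function keeps private access to `count`
// long after makeCounter has returned.
function makeCounter() {
  let count = 0;
  return function increment() {
    count += 1; // still sees the outer `count`
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2

// Coercion: predictable once you know the rules.
console.log(1 + "2"); // "12"  (+ with a string operand concatenates)
console.log(1 - "2"); // -1    (- coerces "2" to the number 2)
```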

To understand the spirit of this series, compare it to JavaScript: The Good Parts by Douglas Crockford. His book was both good and bad for our community. It's almost single-handedly responsible for bringing lots of developers to (or back to!) the language and giving it serious attention. We owe a lot to him for that. But it also taught developers that there is only a small part of the language you need to learn. And because you only have to learn a little bit of it, that's all most developers ever learn. Even developers with 5 or 10 years of JS experience know comparatively little of the language.

My books are the opposite. They're the anti-"The Good Parts." That doesn't mean they're the bad parts, it means they're all the parts. Rather than avoiding most of the language because one guy said to—rather than running away from the hard parts—I encourage you to run towards "the tough parts" and learn them. When you see something in JS that you don't understand or is confusing, instead of blaming the language as being poorly designed, turn your attention toward your own lack of understanding, and spend the effort to increase your understanding.

This attitude is somewhat unique to JS developers: they expect a language to be so simple and intuitive that merely glancing at it should be enough to understand it, and if they can't, it's a failure of the language. Expecting perfectly self-explanatory syntax and rules wouldn't be reasonable of any other language, like Java or C++. If you were confused by code, you wouldn't blame the designers of those languages. You'd blame either your own understanding, or at least that of the person who wrote the code. Either way, learning the language better is the best solution to that lack of understanding. Many times, when developers hate something about JS, it turns out they simply don't understand it well enough. When I explain how it works, many times they go from hating it to appreciating it—and by the way, appreciating doesn't mean liking, it just means respecting.

I believe JavaScript takes time to learn properly and completely, and that if you're going to write it, then you should invest that effort. You should understand why the code that you write works the way that it works. Instead of saying "it works, but I don't care how," the most important question you can always ask is: "How does it work, and WHY?" I'm an open web evangelist who teaches JavaScript for a living. I work with developers all the time who've learned JS incompletely and improperly, and they're having to fight hard against the grain to re-learn it. That's why I'm so encouraged to see you learning JS in university. By learning JS properly here in school, you can graduate and come right into the industry as a new generation of developers that already understand and appreciate the importance of JS as the standard for the entire web platform.

JS is going to be the foundation of the web platform for the rest of our careers. We might as well get to know it better!

I'll leave you with this: I believe strongly, that the most important thing you can learn at university—of course, you're being taught lots of great stuff—but the most important is how to learn, and how to love and enjoy learning. You'll never find "just one thing" you love and do that for the rest of your career. The industry reinvents itself every couple of years. If nothing else, it'll just be Apple doing that. You have to be adept at learning and remastering new things. That's the path to success in your career, whatever interests you dig into.

Q&A session

Q: Five books, should they be read in a specific order?

A: Scope & Closures is in most demand, and chronological release order is certainly OK. The first three are about the core of JavaScript. Books four and five build upon the first three, but mostly deal with new things coming to the language as of ES6.

Q: How important is free and open source software in your work?

A: Everything about my career is open source. I believe very strongly in the power of open source, and its position in the future success of our industry. If you study the history of technologies, they start closed/proprietary, are shepherded through adoption and evolution, and eventually end up open. Ultimately, open always wins. But increasingly, I believe open should be the default mode. Many people say, "I don't feel like I wanna put my stuff out, they'll make fun of my crappy code..." And when I write code, people say "you just have more confidence 'cause you're good." But if you look at my old code, there is some terrible stuff in there. When I say "you" in "You Don't Know JS", that's a collective term. I don't know it either.

Every time I start writing code for a project, I start with an empty file, publicly, on GitHub. I do the best I can and am constantly evolving. But instead of just using GitHub as a platform for marketing my own code and ideas, I assume that every line of code I write is the worst, and the only way to get better is with the help of others. Open source collectively makes the best software better than any one person can make.

It is a culture you should strive for individually, and professionally. I believe very strongly, "open" is the reason why all this exists, and why the stuff we're doing now will still exist 10 years from now.

Q: I'm in that camp where I am afraid to code publicly. Where do I start?

A: My perspective—and there are different answers—is to seek out others' projects. There is a lot of FOSS contribution that isn't about code. Docs are usually left to the end of a project, and are neglected, but it is critically important that they are up to date. If you can read others' code and add details, examples, or tests, that is a super important contribution you can make. Many of "the rockstars" in FOSS got there by just pitching in, starting with docs and tests. Some projects go the extra mile and identify "low-hanging fruit," bugs known to have simple solutions. That is a great place to start, and you can learn how the project works. Even filing bug reports is a way to contribute before writing your first line of code. But even one line of code is important; someone after you can learn from it.

Q: Where?

A: GitHub is the de facto standard. Any community is fine, sure, and I wouldn't say "pick this project." You should pick a project that is interesting to YOU. If you are into data visualizations, get into D3. Find what you are passionate about. If you do, you'll quickly build your confidence, and that will create a virtuous cycle of making both the people and the code better.

Q: You said that you think JS will be the "only language for the web" for our careers? I'm not necessarily a supporter of Dart, or other similar languages, but do you not expect those to succeed?

A: Great question, and loaded, but... Dart isn't going to succeed in replacing JavaScript, not because it is bad or poorly designed, but because of how Google is going about it. Going beyond what they say on their site, they've positioned it to compete against JS, in hopes of replacing it, rather than being a language that experiments with things to intentionally inform and influence the future of JS. From the original "leaked memo" where the world learned about Dart in terms of "fundamental flaws [in JavaScript] that cannot be fixed," to the Dartium VM they're building in Chrome to sit alongside JS, to the Dart2JS transpiler—the messaging is unclear and smells of not just being a "better compiling JS lang," but more an attempt to hope JS declines if developers can just write Dart in the web natively. I can tell you this for sure: Mozilla will never implement Dart in Firefox. Unless there is a future where Firefox doesn't exist, which I cannot imagine, Dart will not replace JS.

In a bigger sense, there are hundreds of languages that you can compile into JS. You want to run your code on the web, so you "transpile" it into JS. I don't like most of those languages personally, but they are all super important! Source code is not for a computer! There are an infinite number of ways to write code that produce the same 1s and 0s. Source code is for the developer, and you need to find the language that works best with your brain. Also, we need more experimentation, and more compile-to-JS languages, like CoffeeScript, which influenced many great things being added to JS in ES6. The future for CoffeeScript itself may be limited, I think, but that's OK, because it was very important in evolving JS forward. As for TypeScript, I don't like classes, but Eich is on record saying there may be something like its type annotations in the future of JS.

Learn JS first, but as you go through your career, you'll find other languages that work better for certain problems or teams. Many people reach for them because they don't want to learn JS, but that is the wrong way to go about it. Once you really know JS, it's totally OK and healthy to find other languages that you prefer, using JS as their compilation target. That's great for the future of the web platform.

Lead Image: JavaScript code close-up with neon graphic overlay

If you write code, this is your golden age

This is a partial transcription of the two keynotes from day 1 at the All Things Open conference in Raleigh, NC held on October 22nd and 23rd.

Keynote from Jeffrey Hammond, VP & Principal Analyst at Forrester Research

If you are a developer, I cannot think of a better time in the history of our industry to be one. It is a golden age if you write code. If you have people working for you who write code, there is another part of that message: you need to understand how open source is part of that process, or you risk being consumed by the generational changes happening in our industry.

When I built my first company in 1999, it cost $2.5M in infrastructure just to get started and another $2.5M in team costs to code, launch, manage, market, and sell our software. So, it's not surprising that typical "A rounds" of venture capital were $5-10M.

If you look at where we are today, ideas cost about 90% less than what they cost when I got started in this space. Today, omni-channel clients, deployed on elastic infrastructure, aggregate discrete services, use managed APIs, integrate open source software, employ devops techniques, and focus on measurable feedback.

Open source is so ubiquitous. Does anyone want to go talk to a purchasing officer every time they want to spin up another node? That friction is driving adoption systemically. We're seeing the evolution of DevOps, and open source is a driving force for modern application development.

We ran a developer survey at Forrester; 700 developers across multiple countries. We asked: "What classes of open source software have you used in the last 12 months?"

  • 41% open source databases
  • 38% operating systems
  • 34% web servers

When we look at developers that built cloud, mobile, or big data, the response rates are at near-unanimous levels:

  • 93% cloud
  • 92% mobile
  • 78% big data

1 in 5 developers have not used open source software. Even the folks using Microsoft and Oracle are using open source software in some way.

In the past, servers were the big category. Now it is open source databases; we're starting to see a shift in the database domain. We've seen a tail-off in app servers: modern development teams don't use app servers, but change how they run a backend.

When we look at modern applications, we see differences from the previous generation. We see APIs everywhere. Developers today look for services and APIs first. There are more services now to use, and more asynchronous communication. For the last 12 years, we've been locked into an MVC (model, view, controller) world: tightly coupled MVC architectures. That doesn't work as well now. You see more event-driven frameworks. Many of the old frameworks aren't as well suited for today, and the new ones are open source software.

Lightweight process-communication frameworks such as Nginx and Node.js are replacing previous open source software and tech. It isn't just open source versus proprietary; we're seeing open source versus open source, and substitution happening (e.g., Subversion versus Git). In-memory databases are becoming more and more popular and have broad adoption. Elastic infrastructure is the norm. No more "max" licenses.

We are seeing an emerging dominance of sharded SQL or NoSQL databases for open source options in that space.

Behind modern apps, we see a "modern engagement architecture": systems built on top of systems, going to employees or customers. Web apps are being built in a very different way: four-tier architectures. In between, there is an aggregation tier that gathers and ingests data in real time, from the IoT (Internet of Things) and other sources, and predicts the next best step via context. After that, a delivery tier, with things like Amazon Web Services.

Engagement platforms such as Netflix can be decomposed, and you can see all the pieces—client tier, delivery tier, aggregation tier, and services tier—all with open source sprinkled throughout. The same is true of Evernote, Instagram, and Untappd... you can see the pervasive use of open source in modern application architectures.

Increasingly, that innovation is driven by open source software communities. Collaborative collectives set to drive innovation in the industry forward. Those of us looking to hire developers have to understand how to restructure to work with and build on top of the work of those collectives, to attract talent.

I've been asking the question: "Do you write code outside of your day job?" And, 70-75% of developers say "Yes." Some only a couple of hours. Some as much as 11-20+ hours a week off the clock coding. That desire to write code on your own time is driven by a number of motives—learning, starting a company—but those motivations are intrinsic, it makes the developer feel good and feel happy. 1 in 4 developers tell us they contribute to open source software as part of their own time. Those developers are some of the most talented and creative developers out there. If you think of development as a creative field, then you know it is widely distributed. If you are looking to hire talented developers, productive developers, there is a correlation with those that do FOSS (Free and open source software), and those you want in your organization.

There is such a lack of top-talent in our space, everyone is looking for folks who understand modern frameworks and NoSQL, and can build on top of the cloud. If you have those skills, and know how to use the frameworks to build apps, there is a very bright future ahead for you. In the US, we expect developer growth of over 28% until 2020. In 2014, they had a $92K average salary, and in 2013 it was $70K. This shows there is a demand/supply imbalance.

We are in a generational tech shift. Modern tech is different than client/server applications. We gotta understand how to use this tech, and elastic architectures that allow us to innovate cheaply. The cheapness of open source is a perfect fit for modern platforms. 4 out of 5 use open source, and it works.

Open source software projects drive the collaborative collectives, whether they hang on GitHub, or in Drupal, or in foundations like Eclipse or Apache. These are the centers of gravity of development moving forward into the next decade, and the center of gravity grows.

Talent is a seller's market, and we are in a golden age.

Keynote from Dwight Merriman, Executive Director and Co-founder of MongoDB

Hi everyone. I'm Dwight. I work at MongoDB, as mentioned, and I've been there from the start. I wrote a lot of code in the version 1.0 days. I wanted to talk this morning about something consistent with what Jeff was talking about.

Pre-object oriented programming. Pre-relational databases. It goes back a long way. I'm going to focus on the data layer. I do like that term "modern application" because we're not building the same things. It's not "we need a new version of inventory system" but an entirely new class of apps, B2B, and B2C that didn't exist before. The way we build them has completely changed. The schedule this morning, you see all the talks, and they are peppered throughout with the names of projects and products, and lots of open source things. One thing I was thinking about was how Jeff talked about elasticity of licenses. In my mind, it is a big deal for "granularity" of software. It is easy today to mash up third party code that is from completely separate products. You can imagine using a dozen third party components or more. It can become a very long list.

If it was closed source, it would be hard to have that many different things to go buy, evaluate, and develop. That granularity aspect is very real and a big part of how we write software today. We don't want one big monolithic system, we want to break up what we can. Different specialized pieces is what we want, and that isn't as easy if it's not open source.

In that context of modern apps, I wanna talk about data. "Big data" and "NoSQL"—these are weird and imprecise words, but there is something big happening. We're in the midst of the biggest change in the data layer of IT (information technology) in 25 years. Big data is a bunch of new technologies in the data layer that are happening right now, and an unusually large amount of change. A boring but accurate way to describe this is to divide out the buckets: NoSQL, scalable and great for building modern apps; Hadoop, more on the analytics side.

All this is happening, and it is a big change. The types of apps we write are changing too. In the new apps and use cases we work on, the shape of the data is different. The data is unstructured, polymorphic. It isn't just tabular. Accounting data in the real world is tabular, a 1-to-1 mapping. Real-world use-case data comes in all varieties of shapes: unstructured on one side, and complex, or plastic and evolving, on the other. The new tools are great at dealing with this: JSON, and the document-oriented notion of the database, or message passing in that format. This is significant in my mind. Calling the data "unstructured" is inaccurate; usually it has structure, but the structure is dynamic. It is like the split between dynamically and statically typed programming languages. The data we are dealing with is dynamic, and I like to think of it that way.
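To make the "dynamic shape" point concrete, here is a minimal sketch (the field names are hypothetical, not taken from any product mentioned here) of two documents that share a stable core but carry different extra fields, the way a document store would accept them side by side:

```python
# Two hypothetical sensor readings: a shared core (sensor_id, timestamp)
# plus fields that vary by device. A document database stores both in
# the same collection without a fixed schema.
reading_a = {
    "sensor_id": "riveter-17",
    "timestamp": "2014-10-22T09:15:03",
    "torque_nm": 41.7,
}
reading_b = {
    "sensor_id": "combine-04",
    "timestamp": "2014-10-22T09:15:05",
    "gps": {"lat": 35.78, "lon": -78.64},
    "yield_lbs": 12.4,
}

def common_fields(docs):
    """Return the field names present in every document: the stable
    core of an otherwise dynamic schema."""
    return set.intersection(*(set(d) for d in docs))

print(sorted(common_fields([reading_a, reading_b])))
# -> ['sensor_id', 'timestamp']
```

The schema is dynamic, but the shared structure is still discoverable at query time rather than being declared up front.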

I like to think about the size too. Massive scale. How to deal with that. Computers I have today are cheap, and not very tall. They scale horizontally. In 1999, we spent $100M on hardware... we were serving lots of ads, but the computers were 1000 times slower, and they cost more money back then. You could get a "big" computer back then, but they were just taller. Today, you can't get that much faster processor by buying a "bigger" computer. You gotta scale horizontally. Parallelism. That dovetails perfectly with cloud computing, and the imperative to scale out, and this is really the only way to do it.

The main thing I think about is latency. The notion of the application and the nightly or daily report in your mailbox... that is old-school at this point. People are used to real-time interactions in the services they use on their phones. As developers, as we build systems, we gotta start from a real-time mentality. I would tell my team: of the 100 apps we build, default to real-time. As a CIO, that may sound obvious, but that is a big deal! Think back to Computer Science education in the 1980s. In 1980, you were supposed to default to "batch": batch algorithms had lower complexity, ran much faster than real-time, and were the way to go. Today, things are so much faster, and can handle it.

If my delivery mechanism was the mail, or FedEx, then batch processing wouldn't matter. But real-time is one of the properties of modern applications, and something we gotta do. Anytime I do a new app, I default to real-time unless there is a reason not to. Many of the new tools, NoSQL, facilitate that too.

Shape, size, speed, approach

Approach is "how we write code," and that has changed a lot. It is no longer architect, code, design, test, deploy. There is just constant iteration. On the developer side, it is helpful. And for the business side, it is great to be able to say: "Is this what you want?" and it can be changed rapidly to converge on needs if it isn't. It has always been hard to spec things. This opens doors to creativity. Even if I could write a perfect spec, I can have ideas as soon as a month in! You get a better outcome. That is a big part of apps today.

Look at Facebook. Every day the site changes. Daily. For years. Iteration. That is a great real-world example.


I'll give some specific examples. When I'm asked about the usage of MongoDB, it is a "general purpose database" or an "operational database." We are trying to take a new cut at what an operational database is in the context and methodology of today. IoT is going on, and this is one place where people use NoSQL, but people are doing lots of things. John Deere is doing some neat things with data these days.

One of John Deere's messages is: "Feeding the planet"

Imagine it is harvest time, and we're running the combine, and as we go around, you can measure yield. You can measure how many pounds of stuff went into the bin in the last 1-5 seconds. You know where you are by GPS. We can have a yield map, topologically, and very precise. Of course, that is a lot of data. What can we do? We can look at it, and at visual representations of it, like heatmaps. You can see problems like "the river is over here." You can imagine a very granular patch—something the size of this podium—being low yield, and using point-by-point fertilizer! That is the kind of stuff they are working on, and these are real-world tangible things. Candy Crush is "cool," but it is intangible. This isn't just in the mental domain.

One of Bosch's messages is: "Connecting The Planet"

They have a business unit that does lots of software for automation and industrial management. They have drills and riveters—power tools like the one in your garage—except these cost $15K and put rivets in airplanes. Safety is critical in airplanes. There must be quality control. Are the rivets going in right? We can look at it, rivet by rivet. You can see if there are three mediocre rivets in a foot, which is no good, but over 100 feet, that could be OK. The gun records all that data, and it becomes part of the quality control of the manufacturer. In these cases, you gotta store a lot of data. That is what makes this interesting: the shape of the data. The timestamps that have nuggets of info. The polymorphism. All the sensors, not all of them the same, and how well you can deal with those differences.

One of Edeva's messages is: "Safeguarding The Planet"

Think about self-driving cars, which generate tons of data. You can use that data to do analysis to improve traffic flow, and figure out where to change something, or add a road, and what the problem is. You can use it for safety analysis. They've built a technology where, when you are driving across a bridge in Sweden, the system sees your speed. If you are going the speed limit, it does nothing. If you are going too fast, the system creates speedbumps just for you! It creates depressions in the road! If there is an accident on the bridge, you can ruin traffic for half a million people in a day. It has been a very successful project for them. I have mixed feelings about not being able to speed... but that is good for them.

(audience laughter)

Looking at these examples, you can get broader than that, but the question I would put out is: "What are you working on?" Maybe not this second, but over the next year. Are there intersections, or creative things? We've seen great things come from startups, like Uber. But I've also given you some examples here from more "classical" organizations. There is more to do in the "old" domains and fields, and we have the tools now to adopt the new mentality. Be ambitious; try to tackle these things.


Head of Open Source at Facebook opens up

What follows is a partial transcription of James Pearce's OSCON session, Rebooting Open Source at Facebook.

For hundreds of years, open has trumped closed—sharing has trumped secrecy.

In a humble way, this informs our program at Facebook. We have 200 active projects at Facebook, with 10 million lines of code. Many hundreds of engineers work on these, with over 100,000 followers and 20,000 forks. We contribute to a wide range of projects (e.g., the Linux kernel, Mercurial, D). We've even open sourced the designs of our data centers and machines in the Open Compute Project. We want to share a collection of things we've learned along the way.

Why is this so important?

The reason: open source is dorm-room friendly. Our roots stretch back to a young undergrad in 2004 who picked the FOSS (free and open source) software that was available, the classic LAMP stack. Since then, our capacity to participate in communities and make them a better place has increased.

When we find a piece of open source software (OSS), we first try to scale it, and then we find the limitations of the project. So we try to improve them and make them work in scaled environments, and we see this pattern happening over and over again. Mark's decision to use PHP, for instance, had limitations. We built the HipHop "compiler" and the HHVM project, and even more recently, the enhanced PHP language called Hack, launched back in March. Data, web, infra, front-end: all of our technology stack. It is closely aligned with our hacker culture, and with how our organization is perceived. We asked our employees...

"Were you aware of the open source software program at Facebook?"

  • 2/3 said "Yes"
  • 1/2 said that the program positively contributed to their decision to work for us

These are not marginal numbers, and I hope, a trend that continues.

A large number of those people said their experience using our projects in the open helped them get ramped up prior to being hired. That is a huge win for our company.

This is an important part of why open source is valuable to our company. And you need to be able to articulate the value.

#0: Always articulate the value FOSS brings to your company

There are always costs and investments, so understand what your return is. Naive ideology only goes so far, you need data to support continuation. We're confident it helps us do a better job. It helps us keep our tech fresh, justify architectural decisions, bring more eyes to our code. Open source is like the breeze from an open window; it keeps things from going stale.

But, if you wind the clock back a year, you'd find the Three20 project, which has been discontinued... Our PHP SDK... deprecated. Our fork of Memcached, with a description of "test" and commit messages of "5," "6," and "7"...

*audience laughter*

This is the "throw it over the wall" syndrome. We're guilty of this, I'm sad to say, and it is almost worse than not doing it at all.

You need to continue to care about the things you release, or how can you expect others to care about them?

#1: Use your own open source

It is essential to continue using the version you release. Don't create internal forks, keep the code fresh, keep working on it. The community will notice if you don't. Eat your own dogfood.

Sometimes you'll have to integrate your open source code with closed or proprietary tools internally. That usually means you create plugins or adapters, and make architecture decisions that make your project better. With Presto, we needed it to integrate with both open and internal databases. We had a strong plugin architecture, with plugins for open databases and then plugins for our internals.

Nevertheless, we weren't doing that well last year. We decided to refresh our team and get our house in order. At that time, our web team open sourced React at JSConf. React is one of the most exciting projects in the JavaScript world in recent years, with a great community response. It reminded us at Facebook that we knew how to get great projects out there. That initiative came from the developers themselves. There was no promotional team internally; it came directly from engineers.

#2: Decentralize project ownership

Make sure the engineers are the sole custodians. External engineers work with internal engineers directly. No monolithic structure. As we looked at the reboot, we needed to figure out what we already had, and get the portfolio under control.

We needed to answer 3 key questions:

  1. Which projects did we own?
  2. Who contributes?
  3. How healthy are they?

Most were on GitHub. GitHub of course has a great API, so we wrote a script (in Hack) to access and enumerate our projects, and get:

  • every repository
  • every commit
  • every pull request
  • every issue

So we stored all this data, and put it into MySQL.

I love GitHub, but I find it easier to use SQL to filter what is going on. We found some things to address. We realized we could run this import process again and again, and see how trends evolve over time. I am now one of the world's experts in the GitHub API throttling mechanism, and we've got it running very efficiently. All of this is to implement two things: instrumentation and publishing.
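Facebook's actual script was written in Hack; as a rough illustration of the shape of such an import loop, here is a Python sketch (all names are illustrative, not Facebook's code) that walks a paginated API while respecting a GitHub-style rate limit, with the page fetch injected as a callable so the throttling logic stays visible and testable:

```python
import time

def enumerate_all(fetch_page, min_remaining=10):
    """Yield every item from a paginated API.

    fetch_page(page) -> (items, remaining_quota, reset_epoch), mirroring
    GitHub's page-based listing and its X-RateLimit-Remaining /
    X-RateLimit-Reset response headers. When the remaining quota drops
    below min_remaining, sleep until the limit resets before continuing.
    """
    page = 1
    while True:
        items, remaining, reset_epoch = fetch_page(page)
        if not items:  # an empty page means we've walked everything
            return
        yield from items
        if remaining < min_remaining:
            time.sleep(max(0.0, reset_epoch - time.time()))
        page += 1
```

Repositories, commits, pull requests, and issues can each be walked with the same loop, with the resulting rows inserted into MySQL for later querying.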

#3: Invest in instrumentation

We now have time-series data and can create metrics. This is Argus, and it shows the total number of watchers over time. Up to 100,000 followers, polling every minute, we can watch growth over time and find inflection points, which GitHub didn't surface. We launched an iOS library called Shimmer, and then tweaked it, and you can see those surges after investing in the iOS community. Being able to monitor and publish data and progress shows that we are being disciplined, and earns respect via empirical data.

We have over 35 metrics we follow.

Five most important metrics:

  • Average number of Followers
  • Number of Forks per repository
  • Average Pull Request age
  • Average Issue age
  • Number of External commits
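As a sketch of how one of these metrics rolls up from the raw GitHub data, here is a hypothetical computation of average pull-request age (the `created_at` and `closed_at` field names follow the GitHub API's pull request objects; the helper itself is illustrative, not Facebook's code):

```python
from datetime import datetime

def average_open_pr_age_days(pull_requests, now):
    """Average age, in days, of pull requests that are still open.

    Each PR dict carries 'created_at' (ISO 8601, timezone suffix
    stripped for simplicity) and 'closed_at' (None while the PR is
    open), matching the fields GitHub's API returns for pull requests.
    """
    ages = [
        (now - datetime.fromisoformat(pr["created_at"])).days
        for pr in pull_requests
        if pr["closed_at"] is None
    ]
    return sum(ages) / len(ages) if ages else 0.0
```

Tracked over repeated imports, a falling average age signals that a team is keeping up with external contributions.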

#4: Invest in tools

Mostly internally, to help teams run projects. These are internal dashboards, visible to everyone at the company. Everyone is aware of the metrics we follow internally. "Big" views on all projects show which ones are doing well or badly, and you can drill down and see the owner of each project. Owners are clearly defined: an employee we can assign tasks to directly. I can hassle them, and also, if an owner ever leaves (as happened with Tornado), we can find new stewards for the project. For Tornado, we transferred ownership to the community. We have engineers associate their Facebook profile with their GitHub profile via OAuth. We can then track who contributes, whether internal or external. This workflow unlocked so much valuable data about what is going on.

#5: Establish ownership

Don't let projects be orphaned, or flap in the wind. We can show graphs/metrics scoped to projects or teams. Individual teams set quarterly/semesterly goals for themselves often. That social pressure helps projects do well.

#6: Gamification of good behavior

We have teams competing now. React and the iOS Pop project have about the same number of followers, and there is a bit of a space race to get the most followers. Even in the absence of managing projects directly, you can influence them. We don't want engineers spinning their wheels with lawyers, wasting time. We want them to release with discipline, so for each release, we ask:

  • How core to Facebook is this technology?
  • Who will use it, who is it useful to, how valuable is it?
  • What else already exists, that is similar to this technology?
  • Is there anything novel in the project?
  • Does it include third-party code, including third-party open source?
  • Who will maintain the project, accept contributions, and liaise with community?
  • Where/how should project be distributed?
  • What is public release date?

We have a very strong template for licensing. We stick to BSD, occasionally Apache or Boost, and the only reason we'd look at other licenses is when the target community has a strong culture of using that license. We don't impose licenses unfamiliar to a community.

#6.5: Choose your lawyers wisely

We have a linter to make sure the license headers are all there and everything is good to go, in a private repository. Then we release mid-week, tweet about it, do a Facebook open source social media blast, and then post it to the code blog. Then the social media magic takes hold, and we get good momentum on the first day. We have an internal group of 600-700 employees interested in FOSS.

Every Friday, Mark gathers everyone at 4pm for Q&A. At the start of the session, Mark talks about new apps, products, and releases. He's taken to announcing our OSS projects in these meetings, and you can only imagine how motivating that is: knowing the CEO is aware of a project, and announces it to the whole company. Much comes from infrastructure teams, and that is a huge boost for them. I see a huge surge in interest after Mark talks about a project.

#7: Launch is only step zero

You have to know how to keep a project successful after release. I look at the number of followers over time. We can see the bumps of interest over the first week, and a gradual slope over time. It is the gradient of the second half that matters, not just what happens on the first day.

Some exceptional cases:

  • fb-flo and origami beat this curve; flo was released at a JavaScript conference, tripled their community; face-to-face PR hugely grows FOSS success
  • KVO Controller did two week intervals and saw strong growth after each session; practice makes perfect
  • our climax was the release of Pop, which blew everything away; got 4,000 followers on the first day, 6,000 in the first week, and is way north of 7,000 now

Obviously we benefit from our reputation, but the success was built on the success of previous iOS projects. Pop had a closed beta for two weeks before launch, so out of the gate, we had strong pick-up. Our closed-beta users were our best advocates and helped early growth. The reaction from the iOS community was strong.

We encourage major projects to have their own website. Our design teams have built entire sites for Origami. It shows you care, and take care of your project.

We have IRC, Facebook groups/pages, meetups, and hackathons. It all is important; and it all works.

We have one technique, called a community round-up. The React.js team will gather all the mentions, all the projects, all the demos/presentations, and then shows them to the rest of the community, not just at Facebook. This gives extra authenticity.

The first couple weeks of external commits are vital! On the first day, you'll get a swath of PRs, most of which will be typo fixes in documentation. This is not a bug; it shows that people are feeling comfortable.

#8: Leave breadcrumbs

Docs, unimplemented features, to-dos. As projects go on, they change their destiny. There are many paths: Snapshot, Upstream, Flythenest, Deprecate, Reboot.

Snapshot: usually read-only, academic exercises; many are created to get changes upstream. FBThrift is a good example of this

Upstream: we teamed up with Twitter and Linkedin to get changes upstream in WebscaleSQL

Flythenest: the project goes on to become "its own thing;" some of our major projects will do this, and we'll eventually become just a user like everyone else

Deprecate: project served a useful purpose, and finishes

Reboot: project starts over again

#9: Understand OSS project lifecycles

We launched 65 new projects in the last couple of months. That's about 2.5 projects per week. It is more about quality than quantity, but each has a goal. There is a variety of project types: mobile, infrastructure, and programming languages. All are very broad.

  Metrics         June 2013         July 2014
  Total Repos     129               202
  Followers       50.1K             97.6K
  Forks           11.8K             20.7K
  Pull requests   1400 (502 days)   1973 (208 days)
  Issues          404 (323 days)    427 (186 days)
  Commits         30.7K             42.4K

#10: Be open and connected

It has been a pleasure to share our journey with you today.

Q: In the Facebook license, it looked like "for more information."

A: Straightforward BSD license, and a patent grant. We have a patent grant for the developers, same as what happens in the Apache License.

Q: Does Facebook have a Contributor's License Agreement (CLA)?

A: We didn't have a slide for the CLA, but it is basically the Apache CLA. It exists so we know that contributions that came from external contributors were theirs to give. We then have a bot that comes around to do a GitHub auth. Exactly the same as the Google/Apache process.

Q: Have we open sourced the GitHub scripts?

A: I knew someone was going to ask that! We'll share as much of that as we can soon.

What is your Background?

Name: James Pearce
Title: Open Source Program

I've been in the tech industry for years, mostly in mobile. I worked on early mobile tech, back when it was called "WAP." I've been waiting for mobile to become the next big thing, and it finally has. I joined Facebook about three years ago, working on mobile developer relations, talking about app integration.

When it came to open source software, it was serendipity. We saw it needed love, and here I am. I'm still learning a lot as I go along. We try to federate as much activity as we can, and make it as light touch as possible. We're doing better than we were, but we've got a long way to go. We've got lots of projects, but we want to do more, work with more communities, and think more about how we provide stewardship over time.

How do we do more in mobile? We have lots to offer in Android, and we want to continue to run the program as efficiently as possible.

How can people get involved?

Check out our careers site. All our open source projects are on GitHub, we're friendly, and we're responsive when people send pull-requests.

This derivative work by Remy Decausemaker is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.



A Tribute to Fair Use

Unless someone like you cares a whole awful lot, nothing is going to get better. It's not. Freedom is never more than one generation away from extinction. It is not ours by inheritance; it must be fought for and defended constantly by each generation.

Freedom is the right to question and change the established way of doing things. It is the continuous revolution of the marketplace. It is the understanding that allows [us] to recognize shortcomings and seek solutions.

In the long history of the world, only a few generations have been granted the role of defending freedom in its hour of maximum danger... I do not believe that any of us would exchange places with any other people or any other generation. The energy, the faith, the devotion which we bring to this endeavor will light our country and all who serve it — and the glow from that fire can truly light the world. He who lights his candle at mine, receives light without darkening me and the life of the candle will not be shortened. Happiness never decreases by being shared. Let your light shine before men, that they may see your good deeds. You and I have a rendezvous with destiny. We will preserve for our children this, the last best hope of man on Earth, or we will sentence them to take the last step into a thousand years of darkness.

There’s a battle going on right now, a battle to define everything that happens on the Internet in terms of traditional things that the law understands. New technology, instead of bringing us greater freedom, would have snuffed out fundamental rights we had always taken for granted. With malice toward none, with charity for all ... let us strive on to finish the work we are in, to bind up the nation's wounds ... to do all which may achieve and cherish a just and lasting peace among ourselves and with all nations.

Information is power. But like all power, there are those who want to keep it for themselves. The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly being digitized and locked up. You can’t share it, because your best way of making money from it is to own it, and to keep other people from it. That may be okay [for] real estate, but is it okay [for] knowledge? That may be okay [for] diamonds, but is it okay [for] culture? That may be all right when what it means is that the poor don’t have Lexus, but is it okay when it means the poor don’t have physics?

No man can put a chain about the ankle of his fellow man without at last finding the other end fastened about his own neck. For it is the common good and not private gain that makes cities great.

Do not use your freedom as an opportunity for the flesh, but through love serve one another. Clothe yourselves with compassion, kindness, humility, gentleness and patience. Things like freedom and the expansion of knowledge are beyond success, beyond the personal. Personal success is not wrong, but it is limited in importance, and once you have enough of it it is a shame to keep striving for that, instead of for truth, beauty, or justice.

Think deeply about things. Don’t just go along because that’s the way things are or that’s what your friends say. Consider the effects, consider the alternatives, but most importantly, just think. Whatever is true ... noble ... right ... pure ... lovely, ... admirable excellent, or praiseworthy—think about such things. Do not conform any longer to the pattern of this world, but be transformed by the renewing of your mind.

Once you change your philosophy, you change your thought pattern. Once you change your thought pattern, you change ... your attitude. Once you change your attitude, it changes your behavior pattern and then you go on into some action. Action is the antidote to apathy and cynicism and despair. You will inevitably make mistakes. Learn what you can and move on. At the end of your days, you will be judged by your gallop, not by your stumble. It’s all part of the process of exploration and discovery. It’s all part of taking a chance and expanding man’s horizons. The future doesn’t belong to the fainthearted; it belongs to the brave. Never, never, never, never - in nothing, great or small, large or petty - never give in, except to convictions of honor and good sense. We have the power to decide the fate of our planet and ourselves. This is a time of great danger, but our species is young, and curious, and brave. It shows much promise. It ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Whatever you do, work at it with all your heart. May the Force be with you.

The Tributed

Dr. Seuss, The Lorax, (1904 - 1991)

Ronald Reagan, California Gubernatorial Inauguration Speech (5 January 1967)

Ronald Reagan, Republican National Convention Annual Gala (3 February 1994)

John F. Kennedy, Inaugural address, Washington D.C. (20 January 1961)

Thomas Jefferson, Letter to Isaac McPherson (13 August 1813)

Siddhartha Gautama Buddha (c. 563 – c. 483 BC)

Matthew 5:16

Ronald Reagan, A Time for Choosing (27 October 1964)

Aaron Swartz, F2C:Freedom to Connect 2012. Washington, D.C. (21 May 2012)

Abraham Lincoln, Second Inaugural Address, (4 March 1865)

Aaron Swartz, Guerilla Open Access Manifesto (July 2008)

Eben Moglen, Plenary session at Nonprofit Technology Conference in San Francisco, (28 April 2009)

Frederick Douglass, Speech at Civil Rights Mass Meeting, Washington, D.C. (22 October 1883)

Niccolo Machiavelli, The Prince, (1469 - 1527)

Galatians 5:13

Colossians 3:12

Richard Stallman, ZNet interview (18 December 2005)

Aaron Swartz, UTI interview, (23 January 2004)

Philippians 4:8

Romans 12:2

Malcolm X, Speech at the Congress of Racial Equality, in Detroit, Michigan (12 April 1964)

Bradley Whitford, Spring Commencement at University of Wisconsin, (15 May 2004)

Ronald Reagan, Speech about the Space Shuttle disaster (28 January 1986)

Winston Churchill, Harrow School, (29 October 1941)

Carl Sagan, Cosmos: A Personal Voyage (1990 Update)

Niccolo Machiavelli, The Prince, (1469 - 1527)

Colossians 3:23

Han Solo, Star Wars (1977)

Code.org on reaching the next 100 million computer scientists (SIGCSE keynote)

The following is an adapted transcription from the keynote address given at the 2014 SIGCSE conference by Hadi Partovi, founder of Code.org.


Thanks for the warm welcome!

Last year, SIGCSE (Special Interest Group on Computer Science Education) took place a week after our launch, and the community questioned our motives and our existence. We made a video, that video got 12 million views, and I built an organization around it.

This year, I was welcomed very warmly. I was up until 3 AM last night, which helped remind me that Computer Science educators are not only the coolest, but also the most innovative.

For those who aren't familiar with Code.org, there are lots of opinions about what we are. Lots of people think of us as a marketing org that makes videos of celebs, a coalition of tech folks filling Computer Science jobs, a political advocacy organization for educators and technologists, the organizer of the Hour of Code, a software engineering house, a curriculum writing team, and a grassroots movement to bring together for-profit, non-profit, and governmental organizations around a united goal.

Our vision: Computer Science taught in every school to every student. Not required necessarily, but definitely the opportunity to take Computer Science available to all.

Three things we talk about

  1. Bridging the student/job gap. Politicians think about this a lot.
  2. Reaching underrepresented students, and the social justice implied in doing so.
  3. Computer Science is foundational for the 21st century. Doctors, lawyers, even the President of the United States need to know about this field.

As far back as 15 years ago, I asked our dean why we don't do these things. He said, "Give it 15 years, it will fix itself." But it didn't...

The myths

  1. We're all hype and only do Hour of Code. In fact, we have 15 staff at this event alone and we're always working.
  2. We want to do everything by ourselves. In reality, we have over 100 partners who help us do everything we do.
  3. We are only about "coding" or learning to code. This is probably mostly due to the name, but if we called ourselves "computer science for the masses," our URL would be just as confusing as any other acronym.
  4. People assume Code.org is about the software industry coming to tell schools how to do their jobs. I have a Computer Science background, and I've gotten software companies to fund Code.org, but that doesn't mean they run it. Much of our money is spent in kindergarten, on kids who will never become computer scientists.

The difference between computational thinking versus programming for CS is clear to all of us, but for the average person it may be jumbled. We don't just want people to learn to code, we want people to learn to think. We are disrupting things—not the natural order, but the previous order. We want to disrupt education in a good way.

The pillars of Code.org

  1. Educate: Bring CS to all K-12 schools in the US. This is the biggest job we're trying to tackle: making curriculum, and working within school districts.
  2. Advocate: Remove legislative barriers, make CS part of core academic standards.
  3. Celebrate: Combat stereotypes that prevent more students from joining in CS.

Hour of Code highlights

  • 28 Million students, in 35,000 classrooms
  • 48% were girls! (huge applause from audience)
  • 30 languages, 170 countries: many volunteers helped translate
  • Insanely high ratings (97% positive vs 0.2% negative). This was our first time doing something "real" in public schools, and this rating came from teachers. 75% gave it a 5 and 22% gave it a 4.
  • 20-hour K-8 introductory course: 800,000 students are participating in 13,000 classrooms; 40% girls!
  • School district partnerships: 23 districts, including the #1, #3, and #6 in the US. We've held professional development workshops this summer for ~500 teachers from K12.
  • State advocacy: we changed policy in five states, with eight more on deck!
  • A lean team: we hired eight full-time staff

How did we do all this? Partnerships across industry, non-profit, government:

Pillar #1: Educate

Code.org has developed a full K-12 curriculum:

  • 20 hour modules all the way to middle school
  • Aligned to Common Core standards in math and ELA
  • Middle school modules go into math and science classes: teaching math and science via computer science
  • High school intro course, and a high school AP course

Code.org has two different models for how we spread.

First is the Online Model, where we're focused on putting more courses and curriculum online for teachers and students. The lower the grade, the more freedom teachers seem to have. 3rd grade teachers teach math, sure, but they are not just math teachers, and can find ways to integrate code into more activities. This is extremely cost effective, about $0.05/student!

Then there's the District Model: the district provides teachers, classrooms, and computers; we provide stipends, curriculum, and marketing. This helps make sure there is no cost to the school for adopting a Computer Science curriculum. Managing costs for scale matters: around $5K-10K per high school, and $5,000 x 20,000 high schools that don't have CS adds up. Code.org is looking into developing things like:

  • State level Teacher Certification Exams
  • Incentives/scholarships for studying CS in Schools of Education.
  • Building a pipeline of pre-service teachers

We've found the Holy Grail for online curriculum is to make learning feel like a game. An online curriculum makes teachers' lives easier. This is not about making an "end-run" around teachers; web-based curriculum reduces the IT hassle significantly! Most high school CS teachers in this room also double as the de facto IT person in the school!

(audible "yes" from many and laughter heard around the room)

As long as the IT Department doesn't blacklist us, you can get to our IDE and curriculum. We have a team of engineers working together to blend curriculum and game design. We're still early on in evaluating the results. In the web-world, you run the data through Hadoop and/or Hive, and we've got 10M datapoints.

Some ways people can help


  • Bring a Computer Science Principles course to your institution
  • Partner with your School of Education to bring more Computer Science into the Ed program - ideally a teaching-methods course, or any Computer Science Endorsement.
  • Give support/instruction to the "tech ed" courses at local high schools.
  • Help Code.org scale by offering K-5 workshops! Email us if interested.


  • Convince your local school district to teach CS. (Code.org will enter new regions if 30+ high schools are on board)

  • Help us improve our curriculum. The Hour of Code is behind us now, but we're still getting 1M students to the site every week!

Pillar #2: Advocate

Computer Science is foundational! Every student should have access. Computer Science should be a core academic offering in school, not just a vocational elective on the side. Code.org takes a broad approach. We make recommendations for states to adopt. For further reading, see the ACM report Rebooting the Pathway to Success.

At the national level, we have the Computer Science Education Act, which has bi-partisan sponsors in both houses. It says, more or less, that STEM funding can be used for Computer Science. It's a highly non-controversial bill. There's a small amount of optimism that it will be passed, but since this is the most unproductive Congress ever...

At the state level, we want schools to allow Computer Science to satisfy existing high school math/science graduation requirements. At the university level we want to make Computer Science count. We want Computer Science to satisfy math/science College admissions requirements. We need universities to recognize the above point.

Where it counts

  • CS enrollment is 50% higher in states where it counts
  • 37% more participation by African American and Hispanic students
  • Calculus enrollment remained unchanged
  • We have legislation on deck for states like: FL, NY, IL, CA, AZ, OK
  • We have policy recommendations in the works on a district level in states like: WA, KY, MI, CO, MA

We're going to start a collaborative whitepaper for universities to accept AP/IB Computer Science to satisfy math/science requirements.

Pillar #3: Celebrate

Hour of Code is a hard act to follow. Fastest web technology to reach 15 million users! It took Tumblr 3 years, and Instagram 14 months. We did it in 5 days. 

More girls participated in Computer Science in US schools during the seven days of Computer Science Education Week than in the last 70 years.

(huge applause from SIGCSE audience)

The 2014 Computer Science Education Week is December 8-14. Our goal is that 100 million students try The Hour of Code in 2014. It will require participation by a majority of US students, plus broad international participation. You can help by asking your school to participate, and by buying and wearing swag.

Closing thoughts

Computer Science is at an incredible inflection point. There are people here doing what I've been doing for 10 times longer. If you've tried before and failed, try again. It wasn't easy to get the President to talk about Computer Science, but it was easier than ever before. Leverage the numbers that are now possible.

With shared goals, anything is possible.

After his keynote address, Hadi generously obliged the author with a one-on-one follow-up interview. Below are his responses.

Where are you from?

I'm originally from Tehran, Iran, and now from Bellevue, Washington. Code.org is based in Seattle, Washington.

Where did you study?

Harvard grad, BS/MS in Computer Science, 1994.

Any clubs/activities outside of school?

Our computer programming team took 7th in the world in the ACM International Collegiate Programming Contest my senior year.

Why did you start Code.org?

I think the fundamental reason is that it should happen. For anyone who tries Computer Science and programming, a lightbulb goes on. It teaches creativity, and it is powerful. It seems un-American that 90% of schools don't offer Computer Science. I'm living the dream, sure, but it is a dream that 90% of kids won't have access to.

Fewer schools teach Computer Science now than 10 years ago. This lack of Computer Science is breaking the American Dream.

The very seed of Code.org was a technology roundtable in December of 2011 with President Obama. As I was listening to myself speak, I thought "no one is going to do something about this..." In March of 2012, I was at a conference with Jack Dorsey and Drew Houston. I talked about making a video, which started as a hobby idea, but as soon as they said "yes," I was on a path that hasn't relented since.

What are your thoughts on Free/Open Source Software?

Personally I firmly believe that education should be as free and open as possible. The intellectual property around education should be both free and open source. All the curriculum we create will be licensed Creative Commons, and all the code open source. We want the community and volunteers to help us. We get asked all the time "Can you do this? Can you do that?" and the best answer is "Here is the source code, go ahead and do it."

We've made tutorials with Angry Birds and pigs, and some countries tell us that is a religious issue. When it is open source, we tell them to change it, and they can.

We have prisons that want to use our stuff, but can't have Internet, so we want them to adapt offline versions of our code.

Those are things that we didn't even imagine when we started, and can happen through open source.

Final thoughts?

Even if you are not an engineer, it takes five minutes to see how fun the tutorials are. Even as an adult, you can learn to code too!

If you want to get involved, Code.org is the place for people who want to help us.


This derivative work by Remy Decausemaker is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.



Hacking computer science education at Khan Academy

The following literary transduction is based on a lecture by John Resig given at the Rochester Institute of Technology's (RIT) Center for Media, Arts, Games, Interaction, and Creativity (MAGIC).

I'm deeply honored to be here today. I keep coming back to RIT, every couple of years, and I live in New York City now. There are always new buildings and new things happening. Very fantastic.

I'm here to talk about the stuff I'm doing with Khan Academy. I'm the Dean of Computer Science, which is really a very tongue-in-cheek title, since we can make up our titles. I'll start with an intro to what Khan Academy is and then dig deeper into what I'm doing with the CS (computer science) platform in particular.

I joined Khan Academy in 2011 after I worked at Mozilla doing JavaScript tool development. The big thing, our goal, is: bring world-class education to everyone, everywhere, for free. We are really good at producing educational material. Our math, science, computer science, and art history content is very good. Most content is released under a Creative Commons license, so you can take it and do what you want with it.

We're making inroads to bring our content to the world. I've been working on the internationalization of Khan Academy. There are some problems in computer science that are "solved problems," and you'd think doing a website in multiple languages would have been one of these... but it is hard. So many edge-cases... We have videos, articles, exercises, all things relating to our curriculum.

Who here uses Khan Academy's math curriculum as a supplement? (many people raise hands) That's pretty good! We usually tend to skew younger than college age!

Tracking better data

One of the things we do is track all the work you're doing, so you can get a more effective education. For every exercise and video, we track when you did it and how it went. Whether you got right answers or wrong answers, we can figure out why you're getting it wrong and how to make it better. We have a dashboard where we can push you to work on exercises to move your knowledge forward.

This was the first thing I built at Khan Academy: this framework for easily creating exercises. This is a system where you have a problem, charts, hints, and when you answer stuff, it can steer you when you get things wrong, to give you a better understanding. I worked on the internationalization for this as well, and it was ridiculously hard. Exercises made a lot of assumptions about English and western ways of doing things.

Take "Jane gives one ball to Fred," for example. "Oh, that's easy, we'll just replace the words!" But every language has differing ideas of what is singular and what is plural. Some languages have more or fewer rules, and some even have separate rules if the number ends with a three! We are now available in Spanish and Brazilian Portuguese.
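The plural-rules problem can be illustrated with a tiny sketch. The rule table and names below are hand-rolled for illustration; production systems rely on the CLDR plural-rules data (categories like "one", "few", "many", "other"):

```javascript
// Hand-rolled, illustrative plural rules; real systems use the CLDR data.
const pluralRules = {
  // English: "one" for exactly 1, "other" for everything else.
  en: (n) => (n === 1 ? "one" : "other"),
  // Russian-style rules: "one" for 1, 21, 31... (but not 11),
  // "few" for 2-4, 22-24... (but not 12-14), "many" otherwise.
  ru: (n) => {
    const mod10 = n % 10, mod100 = n % 100;
    if (mod10 === 1 && mod100 !== 11) return "one";
    if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14)) return "few";
    return "many";
  },
};

// Pick the right translated form for a count in a given locale.
function pluralize(locale, n, forms) {
  return forms[pluralRules[locale](n)] || forms.other;
}

const enForms = { one: "ball", other: "balls" };
console.log(pluralize("en", 1, enForms)); // "ball"
console.log(pluralize("en", 3, enForms)); // "balls"

const ruForms = { one: "мяч", few: "мяча", many: "мячей", other: "мячей" };
console.log(pluralize("ru", 21, ruForms)); // "мяч"  (ends in 1, but not 11)
console.log(pluralize("ru", 3, ruForms));  // "мяча" (few)
console.log(pluralize("ru", 11, ruForms)); // "мячей" (11-14 are "many")
```

The point of the sketch: "just replace the words" fails as soon as a second locale is added, because the mapping from a number to a plural category differs per language.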

Much of our material on the site is split into a tutorial format. This is where you can go through, watch videos, and do exercises in a very linear form. This is great for our math content, and you see many students at middle-school and grade-school level using this. As I said before, we're tracking lots of data to give students a better experience, but also, the teachers.

Khan Academy is trying to change how we think about education, so we can provide teachers with insights on how students are understanding content and progressing. You can see how they are struggling.

We want to change the "traditional" model where you go in and the teacher gives a lecture to students about a material and assumes that everyone is at the same level. It assumes everyone has equal footing, and often, this is not the case. You have students lagging, and students who are wayyyy ahead. If we had better data, teachers could make more informed decisions.

One-to-one teaching

We break it down for the teacher so they can track specifically what students are working on at a given time, and analyze students in a class or across classes. No longer do they need to give a general lecture. Now it's "Ok, it's math time. You set your goals."

This week a student decides they want to finish long division. They can work on that and watch videos. The teacher can track whether they are getting stuck, and then give a targeted lecture. If there are four students struggling with long division, the teacher can give those four a lecture to reinforce those concepts, rather than doing the same thing over and over again with the entire class. This means they have time to do more one-to-one teaching. They can see who has higher or lower levels of understanding. This is something that provides a huge level of insight to teachers to better use their time.

We have lots of students using this system now: over 3 million problems per day! There are huge spikes in the beginning of school in September—we hit 10 million active students. I'd like to teach on a classroom basis, sure, but this is a way that I can better expand and stretch myself to teach computer science content. Depending how you measure, we have 250,000 to 1 million students doing computer science stuff on the site.

Computer science at Khan Academy 

I started working on the computer science platform in late 2011 and released it in August 2012. We iterated a number of times and settled on this final implementation. The big thing here: this is on the computer science part of the website. We have a lot of curriculum. My coworker, Pamela Fox, is producing all the content (videos, exercises, etc.).

We go up to object-oriented techniques, with not much overlap with Computer Science 101, and we have a long way to go to be a "full" computer science curriculum. You probably won't be able to go out from here and get a job yet. I want to encourage people, especially the target audience of middle school and up, to find the thing that excites them most about programming and apply it to their life.

Showing people the ways that programming can help improve science, biology, or art. I'd love it if our computer science platform didn't just produce computer science students, but enabled students to do what they love, with programming to help them do it well. Art historians who can program. Humanities people who can code. Cross-pollination for disciplines that traditionally don't overlap. This is hard, but this is what I'd like to see.

Coding for all disciplines

Who is familiar with GitHub? (many hands raise) You can edit code, fork it, upload it, and make changes to the original. Similarly, what we have on Khan Academy is the ability to make programs. Has anyone heard of Flappy Bird? Someone rewrote this within Khan Academy!

We don't allow outside images yet, so anything you can see here on the computer science site is drawn. So, someone made this game, a student I don't know. It got 700+ votes so far and 1600+ "forks," which we call spin-offs. These are all variations on what has been made. It looks like most of them are about the same. Slight changes in the color of the "flappy bird," stuff like that. Students are taking the code and modifying it. This program has already had over 1,000 spin-offs, or programs which were cloned or modified in some way. The top spin-off itself has 113 votes, and over 100 of its own spin-offs! This is a very different collaborative environment. People making something and learning from it.

Khan's collaborative environment

I wanted to build off of the open source model. I wanted the code to be front and center, and not just show the graphical content, even when it is not a programming exercise.

We have a partnership with NASA, and we're doing lots of simulation stuff. One is making a lander go into orbit and land. It is really hard. Though it isn't required, I wanted to show the code. People who are interested in space can look at the simulation, see it, and say "Hey, how does this work? How can I learn more?" I can go in and modify the simulation, save it as my own spin-off, and make whatever changes I want. That is the model we've embraced here.

Stuff like Minecraft and Cut The Rope style games, those are always popular here. Someone made a drawing program. You can't really save "state," so in the program, at the end, it spits out a giant blob of text, which is what you can copy-and-paste into a program to recreate your drawing!

Students adapt, modify, and change

Students have found ways to work around our system. This type of "meta" programming and activity is what it is all about. Students are even making clubs now. They will hang out and chat with each other in the comments section. Social groups are forming.

Students also tend to make lots of demands and often petition for lots of things. Usually you'll find one at the top of just about every page. One thing often requested is support for playing sound, which I suspect students will abuse plenty. (audience has a good laugh) I like when students can discover things for themselves. What they learn from, it tends to be much more deeply embedded. They take more pride in it.

This has been my baby: real-time injection of JavaScript code, so you can inject "state" into live running code. I feel like it is worth it. You get a much more compelling experience, and you can manipulate things as they happen. You don't have to wait for the program to restart. Even as we were typing in the code, the program was still running! It works pretty well.
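The hot-swap idea can be sketched roughly like this. This is a minimal simulation with hypothetical names (`draw`, `tick`, `state`), not Khan Academy's actual implementation: the program's state lives outside the user's function, so a freshly compiled version can be swapped in mid-loop and the running state survives.

```javascript
// State lives outside the user's code, so it survives across code edits.
const state = { x: 0 };

// "Compile" the user's code into a function (first version of the program).
let draw = new Function("state", "state.x += 1; return 'v1:' + state.x;");

const frames = [];
function tick() { frames.push(draw(state)); } // one animation "frame"

tick(); // runs v1, state.x becomes 1
tick(); // runs v1, state.x becomes 2

// The user edits the code: recompile and hot-swap, but do NOT reset `state`.
draw = new Function("state", "state.x += 10; return 'v2:' + state.x;");

tick(); // runs v2, picking up where v1 left off (state.x becomes 12)
console.log(frames.join(", ")); // "v1:1, v1:2, v2:12"
```

The design point is the separation: because `tick` calls whatever `draw` currently is, replacing `draw` changes behavior on the very next frame without restarting the loop or losing `state`.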

We can record all the actions, and keystrokes, and things being said. We can play back commands and audio. We use videos frequently, but I don't think they are great for computer programming. The thing you want students to do is take some code, pause it, and change things to figure it out for themselves.

We had some really janky solutions where we would pause YouTube videos and regenerate code, but we got to this point. Now, you can pause, make a change, and then when you hit play, it reverts the code, and continues from where you left off.

We also have interactive transcripts now too, so that if you are hard of hearing, or you want the transcript for what has been said, you can get it. And we are now starting to translate those into other languages too (we're working on translating the audio).

All the ability to do this stuff is powered by browser technology, like HTML5. We pull content from Amazon S3, and it works surprisingly well.

I showed you math exercises where a student is given a math problem, they put in an answer, and then they get a "right or wrong" result. That works OK for math, but not really for programming. What works well for learning programming is actually writing code. I want the students to be writing out the code. This is one step above recordings. We are basically showing you what code to write, and based on that, students can turn it into their own thing. There are much more complicated programs that can be made from the basics.

One thing I'll mention is that a lot of code is open source and available on GitHub. If you are interested in contributing back, that is all there.

We built a framework this summer called Structured.js. We think it's really cool. You can define a rough structure for the code you want students to write, and then analyze the code to see if it matches that structure. We parse the student code, convert it into a syntax tree, and compare it to our syntax tree.

We use jQuery on the front-end, along with backbone.js and a new framework from Facebook called React. It is pretty crazy, and I think you should check it out. I'm slowly getting used to it. On the back-end, we use Python on Google's App Engine. Sometimes we get on 60 Minutes, and the traffic goes bonkers for an hour or two, then it goes back down. Google App Engine handles it quite well.

I think I will wrap up there and answer any questions anyone might have. Questions about what I'm doing at Khan Academy, questions about jQuery of course. (audience laughter)

Q & A session

Q: Does the order in which things are written matter in Structured.js?

A: In this specific case, yes: it is forcing you to do your 'if' statement before the loop. A tool like this could be used poorly, sure. And that is why we have such detailed hints. We provide the structure itself as a hint. This is one step above a video. They can name their variables whatever they want, we don't care, which is an improvement over most systems like this. This is relatively new, but students love it, and they are using it a lot.

Q: You are doing this teaching in the browser, which is good, but are there any "where to go next?" resources on Khan Academy?

A: We have an article called "what to learn next" where we direct students to tutorials and articles, projects to work on, web development in general, and other language resources. At this point, we are not everything for everyone, but we want to be the launching pad. Here is your first taste, and then you push off from there.

Q: The structure was all S3 and Google App Engine?

A: Yes, all cloud hosted. S3 for file hosting, videos typically pulled from YouTube. We don't have any physical servers. That is the nice thing about being "in the future" now, we don't need any of that. (audience laughter)

Q: What is the history of the name?

A: It was created by Salman Khan. He created many of the videos on the site. We also have other professors and professionals who produce content, but he has done most of it. It started as his YouTube channel to help his cousin learn math. It got bigger and bigger, until it got up to like a million people.

Q: How do you moderate all the content? There could be potentially negative things, right?

A: We have a system for people to "flag" content, which puts it into a moderator's queue. Once it gets three flags, it gets automatically moderated, and then can be reapproved by a moderator. We haven't had anything too terrible show up yet. It's been running since 2012. Really, it's not the programs, as much as the comments... BIG SURPRISE! (audience laughter) There have been "factions" of middle-school students who battle in the comments sections. We were worried about worse in the beginning, but it hasn't actually happened yet. We banned outside images and only allow images that we provide. We had a student write a program that turns an image into a multi-lined call to the rectangle function to recreate it. If we find a way to ban something, students will always find a way around it. They are mostly doing things like putting up pictures of their favorite Pokémon.
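The image workaround described above can be sketched as follows. This is a guess at the approach with hypothetical names (`imageToRects`, a binary pixel grid): turn each horizontal run of "on" pixels into one call to a Processing-style `rect(x, y, w, h)`, so the image is reproduced entirely in code.

```javascript
// Turn a binary pixel grid into drawing-API source: one rect() call per
// horizontal run of "on" pixels (a simple run-length encoding per row).
function imageToRects(pixels) {
  const calls = [];
  pixels.forEach((row, y) => {
    let runStart = -1; // x where the current run of on-pixels began, or -1
    row.forEach((on, x) => {
      if (on && runStart < 0) runStart = x;
      const runEnds = runStart >= 0 && (!on || x === row.length - 1);
      if (runEnds) {
        const end = on ? x + 1 : x; // run includes x only if x is "on"
        calls.push(`rect(${runStart}, ${y}, ${end - runStart}, 1);`);
        runStart = -1;
      }
    });
  });
  return calls.join("\n");
}

// A tiny 5x4 "face": 1 = draw a pixel, 0 = leave blank.
const face = [
  [0, 1, 0, 1, 0],
  [0, 0, 0, 0, 0],
  [1, 0, 0, 0, 1],
  [0, 1, 1, 1, 0],
];
console.log(imageToRects(face));
```

Pasting the emitted `rect(...)` lines into a program redraws the image, which is exactly why banning image uploads alone doesn't stop determined students.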

Q: Biggest tech challenge at Khan Academy?

A: The real-time stuff was pretty huge. The internationalization stuff of the past year was really hard, but hard in a different way. Not hard in the "how do we scale" way, but in the "understanding the cross-cultural issues" way. One of the hard problems for this platform was making it in such a way that young students can understand what is going on. We ran a whole summer school and did lots of playtesting. We'd get a different batch every week. They'd get summer school credit, and we'd get bug reports. We were in there with our notepads, and running everything through JSHint. We were trying to provide errors that were much more intuitive, but this wasn't explicitly a tech problem.

Q: You talked about "clubs" forming in the comments, and I was wondering how far you were willing to take the pseudo-social aspect? Are there plans to expand into different languages or other types of programming?

A: Yes, in some areas. One I immediately think of is the humanities. I didn't think about this initially, but in the way you are making these programs and building a creative work, you spend lots of time and you want people to collaborate. This can happen for writing, poetry, art, music, all of these things for which there isn't a model for teaching online. Having the staffs and bars for composing music and hearing the playback in real-time would be great. Even better would be live-forking and editing for others to collaborate. More immediate feedback on what projects students build is a goal.

The only other language we're looking to go into is HTML/CSS/JavaScript. The huge advantage of JavaScript is that it can run natively, you don't need to pass it back and forth. For now, we'll be sticking with browser native languages.

Q: Khan Academy is free, right? How are you making money right now, and do you see any change in the future?

A: We are funded by grants. The Bill and Melinda Gates Foundation, The Carlos Slim Foundation, and others. There is a lot of interest, and funding, and we aren't in the position of every other education startup—where you're charging teachers, schools, or students money. I don't want to be pinching pennies out of students' pockets. For additional revenue, we have contracts for folks who are using our content in commercial contexts.

We do have lots of jobs, tons of internships. The whole computer science platform was built by me and interns, so if you wanna work with me and build cool stuff, please do.


This article is based on work at




This role has many aspects (see: Job Description), and I look forward to finding out more about it during my orientation at HQ in Raleigh.

In the meantime, I've had a lot of folks asking me a lot of the same questions, so I've put together this handy FAQ.


Q: Will you be leaving RIT?

In some ways, yes; in some ways, not entirely (at least not yet). I will be resigning from my staff position with the MAGIC Center effective immediately, but I will finish out this semester teaching the Humanitarian FOSS Development and Business/Legal Environment of Free/Open Source Software courses. My time on campus will be limited to scheduled class time and office hours until the end of Spring Semester (May 20th is my last final exam). As the team I am joining at Red Hat includes University Relations, and I have a unique perspective on this first-of-its-kind academic program, I will very likely be kept in the loop on both the RIT-side and the Red Hat-side.

Q: Will you be leaving Rochester?

No, not for the foreseeable future. The FCL position is a remote position, and will require travel, but I do not anticipate even considering relocation until after FLOCK 2015 at the absolute earliest.

Q: ZOMG! Does this mean the end of the FOSSBox?

Certainly not! Prof. Stephen Jacobs and a larger-than-ever core of students will keep the wheels on the wagon until a Remy-replacement arrives.

Q: So... when does the cavalry arrive?

Great question! This is an opportunity for MAGIC to reevaluate their needs, and hire a person (or persons, I hope) who can support the FOSS initiative, infrastructure, and general operations. I'm hopeful there'll be a job listing in the next couple of weeks, and the process can begin ASAP. Hopefully we'll see some hiring(s) before the end of the semester, but certainly by fall of next year. MAGIC will move as quickly as the RIT HR process will allow, I'm sure.

Q: So, what happens to the FOSS minor? It isn't going to go away is it?

Nope! The minor can't be undone now that it is official and on the books. In fact, we're hoping to double the number of enrolled students by the end of the academic year. When (if?) I vacate my role as an instructor, the IGM department and SJ will make a list of eligible degree-holding individuals qualified and interested in teaching these courses at RIT, and will know how best to fill that role. As far as content and instruction, it is my impression that the pedagogical and instructional model of course delivery we've developed with the help of Professors Jacobs, Shein, Sherrill, Bean, and many students/alumni (shout-out loothelion, ryansb) will remain at the core of the minor. As a Hackademic and upstream developer of educational software, I plan to continue promoting Free/Open alternatives to the wasteland of predatory Academic Software and LMSs I've seen during my time as a student and instructor.

Q: So obviously now you can totally get me free RHEL, right?

No... See:

Q: So, you're going to stop hanging out in #rit-foss?

No way! It just means you'll also be able to find me in #fedora-{devel,design,meeting,fedmsg,*...}

Q: So, you can totally get me a job at Red Hat, right?

Maybe. If you find a position listed on and send me a link to the listing, I can at least help you begin the process. From what I know about FOSS driven organizations, they seem more likely to hire active contributors and community members, so I would highly recommend diving head-first into whatever stack or project you are most passionate about, and applying thereafter. Here is a good place to start:


It was over five long years ago when I returned to RIT for the first time after my undergraduate studies with lmacken, GregDek, and Mel Chua, to meet with SJ for the first time. SJ had orchestrated a jam-packed schedule for everyone to meet with Deans, and Department Heads, and IT Administrators of all stripes. We resolved after that visit to create the first ever Academic Minor in Free/Open Source Software at a university in the United States, and by golly-gosh we did it! It was even the #1 News Story published by RIT University News Services in 2014! It took a village, and a whole network of villages outside of that village, to make the FOSS minor a reality, and now it is done. I've seen full life-cycles of academic careers, from Freshmen to Graduate, start to finish. Every time another student gets a grade in my course, or lands their first internship in FOSS, or gets their first job after graduation, I think of this quote:

If your plan is for one year, plant rice.
If your plan is for ten years, plant trees.
If your plan is for one hundred years, educate children.

As a Hacktivist, I have always felt that Technological Literacy was the most viable long-term strategy. It won't be fast, it won't be easy, it may not even be cheap, but it will be Free. We've got a model now, one that I hope can be replicated in other academic programs, iterated upon, and improved. It took us a while, but for the first time I can say that I can safely walk away. RIT and MAGIC can take the ball from here.

I will be ever-thankful for the opportunities that have been afforded to me in Rochester, and the amazing community that has supported our work and our students. Perhaps most of all, I am thankful for my grey-bearded mentor Steve Jacobs, who has always treated me like a colleague, and been willing to acclimate to the wildwest that FOSS communities can be. He's always been one to bring everyone to the table, and without him, none of this would have been possible. If there were one Steve Jacobs at every university, Open Education would be a solved problem. Much Love SJ.

Students, you know that I'm not leaving as much as I'm swimming upstream, so I don't need to say "goodbye" to any of you ;)


Fedorans, I have been a user and advocate for years, and it is my privilege to represent and serve you. Fedora, for years now, has been the bedrock upon which my portal to the digital realm is affixed (I've been installing, preupgrading and fedup-ing since Leonidas.) The prospect of taking the community development, advocacy, and organizing skills I've been building here, and applying them to growing a community of millions of contributors and users in every timezone, for the leading enterprise Linux provider on the planet, is a humbling opportunity one can only dream of... Not all of you know me, and I certainly don't know all of you, but I want to. Your contributions have helped me all these years to get to where I am, and now it is my time to return the favor. I cannot wait to tell your stories, support your efforts, and hack alongside you. There is much to learn, and much to do, and I'm going to need all the help I can get.

Wrapping up the Summer of Code at the Googleplex

Over 280 attendees representing 177 mentoring organizations gathered for a two-day, code-munity extravaganza celebrating the conclusion of Google Summer of Code with the annual Mentor Summit held at Google in Mountain View, California.

Mentors and admins began arriving on Friday night, and walking about you could catch bits of conversation, spoken in a plethora of languages and accents, spanning from pixels to bits. No less than four trips of double-decker bus loads, from two different hotels, shuttled everyone into the Googleplex. The morning began with a hearty breakfast and coffee from Google's expert baristas. With trays piled high with eggs, bacon, muffins, and other breakfast-y goodness, mentors took their seats in the massive company cafeteria. Under a quartet of stage lights in that familiar Google colored glow, Google Summer of Code lead Carol Smith stepped up to the microphone, and welcomed the crowd.

Once folks were acquainted with the schedule of events, places of interest, and policies to follow, Free Open Source Software (FOSS) advocate and Director of Open Source Programs at Google, Chris DiBona, addressed the audience:

The reason you are here is because you deserve to be. The whole point of GSoC is to introduce new developers to FOSS, create more FOSS code, and support projects we think are great. We look at reviews, and the aftermath and say 'did it work?' You are here because it did. Thank you for being there for open source software. Thank you for being there for free software, and for being there for Google. Open source matters to us. The future of open source matters to us. This room, and the people you bring in, without you, it wouldn't be as wonderful in 5-10 years as it is today.

The big reveal

Prior to the summit unconference, attendees had a chance to suggest and vote on session topics using Google moderator. Sessions were assigned to rooms of a size proportionate to their level of interest. Ample space was also provided for sessions that were proposed on-the-spot, often inspired by discussions from previous sessions.

The Pumphandle session

Pumphandle session

Photo by Thomas Bonte

The first session of the unconference took the entire GSoC audience, split it down the middle, and formed two long lines for a full morning of meet-and-greet handshaking. This provided attendees with an opportunity to meet each other and have conversations they may not have had otherwise during the busy summit.

The Chocolate Room

Behold, the annual cocoa cornucopia! Mentors from around the world packed plenty of sweet treats to share with their fellow hackers. Milk chocolate, dark chocolate, bacon chocolate, and yes, even fish chocolate.

The GSoC Band

GSoC Band

Photo by Thomas Bonte

In the open source community, ad hoc collaborative teams are an everyday occurrence. But to see it happen outside of a source code repository, with a full drum set, five kinds of stringed instruments, a keyboard, and even an oboe... that is something you don't see everyday. Shout out to Saturday night's Emcee, host and bringer of instruments, Googler Marty Conner, who got the GSoC band back together for 2013.

The Sticker Swap

Over the course of the summit, Googlers would freshen the tables of swag at the front of the cafeteria. T-shirts, banners, stickers, and even GSoC socks! But Google wasn't the only team with a horse in the swag race. Mentors brought stacks of stickers from dozens of projects to participate in the annual sticker swap.

Googleplex tours

During the lunch hour each day, Stephanie Taylor and Mary Radomile of Google's Open Source Programs Office gave attendees guided tours of the Googleplex campus.

With each new release of Google's Android operating system comes a new codename and a new statue in the Sculpture Garden. Note the new KitKat Android on the right side of the photo.

The Cakes

Thanks to Joel Sherrill with RTEMS, who supplied the templates for the giant Google Summer of Code birthday cakes, celebrating nine consecutive years of FOSS community engagement with the logos for each year of the program on two tasty cakes.

A new GSoC tradition

Based on feedback from last year's summit, the organizers agreed to put together a whole track of Google talks, given by current employees about a variety of projects, initiatives, and technologies. One of the more popular sessions was led by Wesley Chun, Developer Advocate with the Google Cloud Team. Chun talked about the Google Cloud Platform, its variety of services, and special discounts and support provided by Google to FOSS projects.

Big takeaways

GSoC Mentor Summit

Photo by Matthew Dillon

As a first-time Google Summer of Code Mentor, attending my first summit, I cannot even begin to recount all of the amazing things that occurred over the course of the weekend. If you clicked on the link at the top of this article for the 177 mentoring organizations represented at the summit, you can begin to imagine the sheer magnitude of talent, passion, and dedication that gathered in Mountain View.

As a storyteller, I accumulated thousands of words worth of notes from all the sessions I attended, which sadly, I cannot possibly share with all of you readers in a single post, so we're going to have to do a highlight reel of sorts.

Operating Systems Summit: When else do you see core developers from Gentoo, Debian, Fedora, NetBSD, FreeBSD, DragonFlyBSD, and others, all politicking in one place?

Gamification in FOSS session: Tales of developer incentivization were shared by projects such as Joomla, Battle For Wesnoth, and the Fedora Community.

Humanitarian Free Open Source Software (HFOSS) session: founders and members met with representatives from other projects such as OpenMRS, Sigmah, PostgreSQL, The Sahana Software Foundation, The Tsunami Information Project, Mifos, NetBSD, SugarLabs, BRL-CAD, and a handful of others, to discuss our role as hackers to improve the conditions of our planet, and our species.

Outreach Program For Women: Led by Karen Sandler, Executive Director of the Gnome Foundation, who introduced the OPW, and discussed ways to bring more diversity to your FOSS project.

Next year will mark the 10th year of Google Summer of Code! In honor of the-big-one-oh, Google will be expanding the Google Summer of Code program 10% across the board:

  • 10% increase in student stipend
  • 10% increase in total number of students accepted
  • 10% more accepted Mentor organizations



Like what you see here? Is your project interested in mentoring? Are you a student that wants to get paid to work on free and/or open source software with world-class hackers? Then you should apply for Google Summer of Code 2014. See the original article for a list of important dates!

Originally posted on Google Summer of Code blog. Reposted using Creative Commons.

Lead Image: Google Summer of Code annual Mentor Summit

LibrePlanet2013 Keynote: RMS

Below you will find a rough transcription taken during the keynote
address at LibrePlanet 2013: Commit Change. It includes RMS's remarks,
as well as an incomplete transcription of remarks from the recipients
of the Free Software Awards. THIS IS NOT A COMPLETE TRANSCRIPT!

We've been at this for a long time. Now we're facing even harder challenges and threats than in the past. For a long time, PC architecture was stable, and for a long time, MS-DOS made it so much couldn't change. When it got perverted in the '90s, still the things we couldn't handle were limited to the fringes. We started seeing peripherals with proprietary drivers due to c-crypt.

Then the BIOS suddenly became something that could be replaced. It wasn't just a piece of circuitry we could ignore. We had to start developing Free BIOSes like Coreboot.

Things got even worse around 10 years ago. Manufacturers started to refuse to tell us what we needed to know to make Coreboot run on our machines. This had to do with Digital Restrictions Management.

Then it got worse.

Intel and AMD processors require microcode blobs, so we discovered blobs. It might as well have been a circuit before, but then it changed to installed software, which we do have to care about.

In the PC world, most things, the problems were at the edges. Now with mobile computing, disaster is spreading everywhere, like the dam broke. There is nothing comparable.

What we find now is that they are building systems on a chip, and the company that makes the chip doesn't have control of what is on the chip; it licenses parts from different places.

How do we pressure the company that makes the computer, to get it to pressure the chip producers, to pressure the company that designs the piece of that chip, to make it work for Freedom? Or we do reverse engineering, which is probably what it comes down to.

I tell universities to teach reverse engineering, and have them do it for some important peripherals.

In addition to the disaster, we see a lot of tyrannical devices designed so users can't replace the software. Some Android devices, some Apple, and others. This "Tivo-ization" was the first time we noticed hardware stopping users from running Free software. This led me to realize we had to change the GPL so that Freedom 1 was a practical Freedom, and not just some fantasy.

We see nasty things happening in initialization software, like M$ restricted boot.

There is a similar problem in the RaspberryPi, and the only way to make it work is with a blob. It is even worse than that, it can't even boot without the blob. There are other such boards that don't have that problem. We need to inform the public about this choice.

Lots of people are focused on Rpi, so I asked someone to make a list of products [like RaspberryPi] that respect Freedom more.

None of those things can do the job that this does. We need to have laptops and servers that you can run. The Northbridge now requires a blob... the people who saw this was the case, it didn't occur to them that this was a disaster.

Our reverse engineering task list is growing, and we don't find many people who want to do this work. If you want to make a tech contribution to our Freedom, this is where we need you most. Please learn reverse engineering for the specs of peripherals, and help develop Free replacements, so we can use circuits without being under control.

I just got info about new mobile operating systems, like Firefox OS, which, like Android, uses non-Free software to talk to peripherals. It is not helping us, and won't enable us to get any closer to Freedom than we were without it. I'm afraid no such project will help us on that; they are not interested in addressing the hard problem of Freedom. They want success, and want to be popular, so they don't tackle the place where Freedom is being defeated, and replace the layers that we don't need to replace... they are not helping to reach the goal of Freedom.

I've found that the Chromebook is no better than anything else, but we still don't know about the ARM. It looks like we are going to have to do something to bring about the existence of a computer you can run with a Free operating system. We could use them all in the past, then some, and now and in the future we're going to have to build and sell computers. We're going to have to raise money, and get into the habit of buying computers that were designed for Free software, instead of the old "liberate other users' computers." It used to be great to say 'bring your computer in, and we'll Free it.'

It is going to be a constant struggle to do that in the future. We can't put all of our eggs in that basket, we've got to push on the reverse engineering also. Once you can do it, teach others to do the same. We must do both efforts in parallel, and maybe one will be a success.

There are some Free software developers who seem to have a hunger to get their software into the Apple 'crap' store. It doesn't allow Free software, of course; Apple won't approve unless it is non-Free. They feel the temptation to build a non-Free executable to run on the 'iThings.' It is better to say 'jailbreak your iThing, and then install the Free executable.'

If you are going to use an iThing, at least get out of the jail.

People are being tempted to fail to uphold the cause of Freedom, just to get more people to run their code. I think this is a poor choice of values.

What is more important, Freedom or being more popular?

We see people who want to have their Free program in the App store, but then they make another mistake: they think they should remove the Copyleft, which isn't necessary, since they can make executables. They don't need to change the license on the source. This is still a mistake, but a smaller mistake than changing the license on the source code to a lax permissive license.

It's not just letting something through the wall, but taking the wall down entirely.

They will still use a Free software license, but abandon their attempts to take away Freedom. If you must cater to the crap store, you can release an executable without changing the license on the source code. Better yet, release the executable under the GPL, and jailbreak. The more things that require jailbreaks, the more incentive to install, and the bigger the fight will be against what Apple is doing. Remember the App store is really censorship, carried out for Apple's business interests. For society to make App store publication the standard is endorsing one company making censorship a standard.

To convince people to stop using Copyleft is the most ironic horrible thing you can do in this area. I'll likely publish an article with more info later.

There recently started a campaign to amend the DMCA to permit unlocking any device. The campaign is called 'fix the DMCA', we have to take issue with this because it is not enough to fix it. The anti-circumvention provisions must be abolished. The digital handcuffs are nasty, and there should be no restriction on devices or using them to break handcuffs. That is not even enough. DRM should be illegal.


Yes, that proposed change would be a step forward, but if we are going to endorse it, we have to be careful to repeat "this is not enough, just part of it". There is momentum in a campaign, and when the people say "help us", you may think "I should boost the campaign, and take the small change" instead of keeping pressure for the big change. Every time we say we're in favor, we need to say "but... we need more!"

Now, I should mention a bit more about portable phones. Nowadays there will be two computers [in a portable phone]: a signal processor, where the program runs to handle radio communication with the phone network, and the main computer to do most other things. If the software that controls the signal processor were fixed, we could consider it part of the circuitry, and ignore it. But it can be changed, without the user's consent, remotely, through a back door in the phone network. That software can take control of the other machine, like sharing access to all the memory. The phone network can say 'overwrite the software in the main computer.' You can install Replicant, but that doesn't give you control over your computing that you can trust; it can be replaced anytime with something malicious.

We need to get a phone design such that the signal processor cannot control the main computer. This was the case with OpenMoko. The signal processor couldn't do anything to the main computer.

Or we need Free software in the signal processor. There is some for one job, the voice protocol, GSM. There is a different protocol for data, which means starting completely from scratch. Now, if we had a completely Free software cell phone, or one where the signal processor didn't have a chance to be malicious, would that make it a good thing to use? There is no way for the phone to talk with the phone network without the network knowing where the phone is.

This is Stalin's dream, and I won't own one.

If you had a parabolic antenna, then maybe you could point that at one particular tower, and maybe that would be the only tower that would get signal from you. Then you couldn't be triangulated. I don't know if this works in practice. There is no perfect directional antenna. Size makes a practical difference too. *laughter* It would be interesting to have someone try this out, someone who knows about antennas, and give it a try, or calculate what would happen.

There are services that you can ask to tell you where you are. You can use this to find out if you can make a difference in practice. They'd still know you're in a particular city, but that is a lot less than knowing what block you are on.

Now it is time for the awards. First, the award for contribution to Free software. This goes to Fernando Perez, for IPython, an interactive programming environment in Python. I want to say one thing: if you write a substantial program in Python, app or package, please release it under the GPL. It is extremely important to do this.

Fernando Perez

Thank you everyone, I'll be brief. I want to thank the FSF for the work you all have done to enable the stuff we do today, despite the threats we face today. When I worked on IPython, it wasn't clear it would be valuable, but it was clear it would be a black hole for time. *laughter* Those close to us, like my wife, pay the highest price, so I want to thank her for that. It is the project of a community: Brian Granger (sp), and a Berkeley contributor [name lost].

People in the scientific Python community: I spend time with scientists who are interested in software. Eric Jones helped in the beginning, and I want to thank him. Also thank you GVR, who created Python. And finally UC Berkeley, which allows me to do work that straddles academic science.

We have some support from the Sloan Foundation now, thankfully. IPython started its life as a shell, and has grown into a tool. It is not just a scientific project; code is everywhere, and this is about interaction. IPython is language-agnostic by and large, but we want to appeal to other languages soon.

This award is dedicated to the author of Matplotlib, John Hunter. He was like a brother, and we worked for 10 years together. This is for him. *applause*


The award for using Free software, or the spirit of Free software, to make a better world, the social world, goes to OpenMRS. *applause*


Thank you for this very important award, and recognition of those who have made MRS a success. Big thanks to RMS, the committee, and the FSF.

MRS came together to solve a problem in healthcare for developing countries: providing access to health info, and making access a key issue. It was key to the project; people needed access to data, supply management, and X-rays. Free software was part of it from the beginning. The best way to empower people, particularly in developing countries, for their needs, is to use Free Software. We trained 30 programmers in Rwanda, and they can now implement it themselves and have set up a hospital using the software. For nearly a decade we've been doing this. Thank you Bill Tierney, and many other colleagues. Ben Wilk, key devs. Many of the programmers in Uganda, and around the world. The Rockefeller Foundation, and others.

The US CDC, and the WHO. We've had strategic partners in Rwanda, Kenya, and Haiti. I'd like to thank the FSF; I had an office next to Richard in the '90s.

Our Free Software system is now used in 50 countries.


I made the suggestion that if they changed the name to LibreMRS, they'd do better.


I want to thank everyone for being here. We start at 9am, and breakfast is at 8:15. We'll see you in the morning.