Martin Albisetti's blog


Click packages and how they’ll empower upstreams

As the pieces start to come together and we get closer to converging mobile and desktop in Ubuntu, Click packages running on the desktop start to feel like they will be a reality soon (Unity 8 brings us Click packages). I think it's actually very exciting, and I thought I'd talk a bit about why that is.

First off: security. The Ubuntu Security team has done some pretty mind-blowing work to ensure Click packages are confined in a safe, reliable but still flexible manner. Jamie has explained how and why very eloquently. This will further strengthen an OS that is already well known and respected as a safe place to do computing for people of all skill levels.
My second favorite thing: simplification for app developers. When we started sketching out how Clicks would work, there was a very sharp focus on enabling app developers to have more freedom to build and maintain their apps, while still making it very easy to build a package. Clicks, by design, can't express any external dependencies other than a base system (called a "framework"). That means that if your app depends on a fancy library that isn't shipped by default, you just bundle it into the Click package and you're set. You get to update it whenever it suits you as a developer, and have predictability over how it will run on a user's computer (or device!). That opens up the possibility of shipping newer versions of a library, or just sticking with one that works for you. We exchange that freedom for some minor theoretical memory usage increases and extra disk space (if 2 apps end up including the same library), but with today's computing power and disk space cost, it seems like a small price to pay to empower application developers.
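As a sketch of how little a Click declares to the outside world, a manifest looks something like this (the name, version and framework string here are made up for illustration): the framework is the only external dependency it can express.

```json
{
    "name": "com.example.fancyapp",
    "version": "1.2",
    "framework": "ubuntu-sdk-13.10",
    "hooks": {
        "fancyapp": {
            "apparmor": "fancyapp.json",
            "desktop": "fancyapp.desktop"
        }
    }
}
```

Any fancy library the app needs simply gets shipped inside the package alongside the binary, rather than declared as a dependency to be resolved at install time.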
Building on top of my first 2 favorite things comes the third: updating apps outside of the Ubuntu release cycle and gaining control as an app developer. Because Click packages are safer than traditional packaging systems, and dependencies are more self-contained, app developers can ship their apps directly to Ubuntu users via the software store without the need for specialized reviewers to review them first. It's also simpler to carry support for previous base systems (frameworks) in newer versions of Ubuntu, allowing app developers to ship the same version of their app to both Ubuntu users on the cutting edge of an Ubuntu development release, as well as the previous LTS from a year ago. There have been many cases over the years where this was an obvious problem, OwnCloud being the latest example of the tension that arises from the current approach where app developers don't have control over what gets shipped.
I have many more favorite things about Clicks; here are a few:
- You can create "fat" packages where the same binary supports multiple architectures
- Updates between versions are transactional, so you never end up with a botched app update. No more holding your breath while an update installs, hoping your power doesn't drop mid-way
- Multi-user environments can have different versions of the same app without any problems
- Because Clicks are so easy to introspect and their confinement so easy to verify, the review process has been easy to automate, enabling the store to process new applications within minutes (if not seconds!) and make them available to users immediately

The future of Ubuntu is exciting and it has a scent of a new revolution.

Filed under: Canonical, Ubuntu 4 Comments

Engineering management

I'm a few days away from hitting 6 years at Canonical and I've ended up doing a lot more management than anything else in that time. Before that I did a solid 8 years at my own company, doing everything from development, project management, product management and engineering management to sales and accounting.
This time of the year is performance review time at Canonical, so it's gotten me thinking a lot about my role and how my view on engineering management has evolved over the years.

A key insight I picked up from a former boss, Elliot Murphy, was to view it as a support role for others to do their job, rather than a follow-the-leader approach. I had heard the phrase "As a manager, I work for you" a few times over the years, but it rarely seemed true and felt mostly like a good concept to make people happy, not something really applied in practice in any meaningful way.

Of all the approaches I've taken or seen, I believe the best one is a role where you're there to unblock developers more than anything else. And unless you're a bit power-hungry on some level, it's probably the most enjoyable way of being a manager.

It's not to be applied blindly, though; I think a few conditions have to be met:
1) The team has to be fairly experienced/senior/smart; if it isn't, this breaks down too often
2) You need to understand very clearly what needs doing and why, and need to invest heavily and frequently in communicating it to the team, both the global context as well as how it applies to them individually
3) You need to build a relationship of trust with each person and need to trust them, because trust is always a 2-way street
4) You need to be enough of an engineer to understand problems in depth when they're explained, know when to defer to others' judgment (which should be the common case when the team is generally smart and experienced) and be capable of tie-breaking in a technically savvy way
5) Anyone whose ego doesn't fit in a small, 100ml container needs to leave it at home

There are many more things to do, but I think if you don't have those five, everything else is hard to hold together. In general, if the team is smart and experienced, understands what needs doing and why, and likes their job, almost everything else self-organizes.
If it isn't self-organizing well enough, walk through those 5 points; one or several must be misaligned. More often than not, it's 2). Communication is hard, expensive and more of an art than a science. Most of the time, when things have seemed to stumble a bit, it's been a failure in how I understood what we should be doing as a team, or a failure in how I communicated it to everyone else as it evolved over time.
The second most frequent, I think, is 1), but that may vary more depending on your team, company and project.

Oh, and actually caring about people and what you do helps a lot, but that helps a lot in life in general, so do that anyway regardless of your role 🙂

Filed under: Canonical No Comments

A story on finding an elusive security bug and managing it responsibly

Now that all the responsible disclosure processes have been followed through, I’d like to tell everyone a story of my very bad week last week. Don’t worry, it has a happy ending.


Part 1: Exposition

On May 5th we got a support request from a user who observed confusing behaviour in one of our systems. Our support staff immediately escalated it to me and my team sprung into action for what ended up being a 48-hour rollercoaster ride that ended with us reporting upstream to Django a security bug.

The bug, in a nutshell, is that when the following conditions line up, a system could end up serving a request to one user that was meant for another:

- You are authenticating requests with cookies, OAuth or other authentication mechanisms
- The user is using any version of Internet Explorer or Chromeframe (to be more precise, anything with “MSIE” in the request user agent)
- You (or an ISP in the middle) are caching requests between Django and the internet (except Varnish’s default configuration, for reasons we’ll get to)
- You are serving the same URL with different content to different users

We rarely saw this combination of conditions because users of services provided by Canonical generally have a bias towards not using Internet Explorer, as you’d expect from a company who develops the world’s most used Linux distribution.


Part 2: Rising Action

Now, one may think that the bug is obvious, and wonder how it went unnoticed since 2008, but this really was one of those elusive “ninja-bugs” you hear about on the Internet and it took us quite a bit of effort to track it down.

In debugging situations such as this, the first step is generally to figure out how to reproduce the bug. In fact, figuring out how to reproduce it is often the lion’s share of the effort of fixing it.  However, no matter how much we tried we could not reproduce it. No matter what we changed, we always got back the right request. This was good, because it ruled out a widespread problem in our systems, but did not get us closer to figuring out the problem.

Putting aside reproducing it for a while, we then moved on to combing very carefully through our code, trying to find any hints of what could be causing this. Several of us looked at it with fresh eyes so we wouldn’t be tainted by having developed or reviewed the code, but we all still came up empty each and every time. Our code seemed perfectly correct.

We then went on to a close examination of all related requests to get new clues to where the problem was hiding. But we had a big challenge with this. As developers we don’t get access to any production information that could identify people. This is good for user privacy, of course, but made it hard to produce useful logs. We invested some effort to work around this while maintaining user privacy by creating a way to anonymise the logs in a way that would still let us find patterns in them. This effort turned up the first real clue.

We use Squid to cache data for each user, so that when they re-request the same data, it’s queued up right in memory and can be quickly served to them without having to recreate the data from the databases and other services. In those anonymised Squid logs, we saw cookie-authenticated requests that didn’t contain an HTTP Vary header at all, where we expected them to have at the very least “Vary: Cookie” to ensure Squid would only ever serve the correct content. So we then knew what was happening, but not why. We immediately pulled Squid out of the middle to stop this from happening.
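To see why a missing “Vary: Cookie” is so dangerous, here is a toy sketch of how a shared cache builds its keys. This is purely illustrative (Squid's real logic is far more involved), but it captures the mechanics of the bug:

```python
# A toy model of a shared HTTP cache (like Squid) to illustrate why the
# Vary header matters. Names and structure are illustrative only, not
# Squid's actual implementation.

def cache_key(url, request_headers, vary):
    # A caching proxy keys entries on the URL plus the value of every
    # request header named in the response's Vary header.
    varied = tuple(sorted((h, request_headers.get(h, "")) for h in vary))
    return (url, varied)

cache = {}

def fetch(url, request_headers, backend, vary):
    key = cache_key(url, request_headers, vary)
    if key not in cache:
        cache[key] = backend(request_headers)  # cache miss: hit the app
    return cache[key]                          # cache hit: shared entry

# A backend that renders a private page for whoever the cookie identifies.
backend = lambda headers: "private page for " + headers["Cookie"]

# With "Vary: Cookie", each user gets their own cache entry.
a = fetch("/inbox", {"Cookie": "session=alice"}, backend, vary=("Cookie",))
b = fetch("/inbox", {"Cookie": "session=bob"}, backend, vary=("Cookie",))
assert a != b

# Without it (the header was deleted), Bob is served Alice's cached page.
cache.clear()
a = fetch("/inbox", {"Cookie": "session=alice"}, backend, vary=())
b = fetch("/inbox", {"Cookie": "session=bob"}, backend, vary=())
assert a == b  # the bug in a nutshell
```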

Why was Squid not logging Vary headers? There were many possible culprits for this, so we got a *lot* of people involved in searching for the problem. We combed through everything in our frontend stack (Apache, Haproxy and Squid) that could sometimes remove Vary headers.

This was made all the harder because we had not yet fully Juju-charmed every service, so we could not easily access all configurations and test theories locally. Sometimes technical debt really gets expensive!

After this exhaustive search, we determined that nothing in our stack was removing headers. So we started following the code up through Django's middlewares, and went as far as logging the exact headers Django was sending out at the last middleware layer. Still nothing.


Part 3: The Climax

Until we got a break. Logs were still being generated, and eventually a pattern emerged. All the initial requests that had no Vary headers seemed for the most part to be from Internet Explorer. It didn’t make sense that a browser could remove headers that were returned from a server, but knowing this took us to the right place in the Django code, and because Django is open source, there was no friction in inspecting it deeply.  That’s when we saw it.

In a function called fix_IE_for_vary, we saw the offending line of code.

del response['Vary']

We finally found the cause.

It turns out IE 6 and 7 didn’t have the HTTP Vary header implemented fully, so there’s a workaround in Django to remove it for any content that isn’t html or plain text. In hindsight, had Django implemented this as a middleware instead, even one enabled by default, it would have been more likely to be revised earlier. Hindsight is always 20/20 though, and it’s easy to sit back and theorise on how things should have been done.
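For the curious, the workaround looked roughly like this. The function name is real, but this is a simplified paraphrase of the logic reconstructed for illustration, not Django’s actual code:

```python
# Simplified paraphrase of Django's fix_IE_for_vary workaround; the real
# code differs in detail and operated on Django request/response objects.

SAFE_MIME_TYPES = ("text/html", "text/plain", "text/sgml")

def fix_IE_for_vary(user_agent, content_type, headers):
    # Old IE mishandled Vary on non-HTML downloads, so the header was
    # dropped for any "unsafe" content type when the UA contained MSIE.
    if "MSIE" in user_agent and content_type not in SAFE_MIME_TYPES:
        headers.pop("Vary", None)  # the offending deletion
    return headers

# An IE user fetching JSON loses the Vary header...
ie = fix_IE_for_vary("Mozilla/4.0 (compatible; MSIE 7.0)",
                     "application/json", {"Vary": "Cookie"})
assert "Vary" not in ie

# ...while any other browser keeps it.
ff = fix_IE_for_vary("Mozilla/5.0 (X11; Ubuntu) Firefox/28.0",
                     "application/json", {"Vary": "Cookie"})
assert ff["Vary"] == "Cookie"
```

Once an IE response left the server without its Vary header, any spec-following cache in front would happily file it under the bare URL and serve it to the next user.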

So if you’ve been serving any data that wasn’t html or plain text, with a caching layer in the middle that implements Vary header management to-spec (Varnish doesn’t trust it by default, and checks the cookie in the request anyway), you may have served a response to the wrong user.
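For reference, the reason Varnish’s default setup dodges this: its built-in VCL refuses to serve cached content for any request carrying credentials at all. This is paraphrased from memory of the default configuration, so check the builtin VCL of your Varnish version:

```
sub vcl_recv {
    /* ... default logic elided ... */
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
}
```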

Newer versions of Internet Explorer have since fixed this, but who knew in 2008 that IE 9 would come 3 years later?


Part 4: Falling Action

We immediately applied a temporary fix to all our running Django instances in Canonical and involved our security team to follow standard responsible disclosure processes. The Canonical security team was now in the driving seat and worked to assign a CVE number and email the Django security contact with details on the bug, how to reproduce it and links to the specific code in the Django tree.

The Django team immediately and professionally acknowledged the bug and began researching possible solutions, as well as any other parts of the code where this scenario could occur. There was continuous communication among our teams over the next few days while we agreed on lead times for distributions to receive and prepare the security fix.


Part 5: Resolution

I can’t highlight enough how important it is to follow these well-established processes to make sure we keep the Internet at large a generally safe place.
To summarise, if you’re running Django, please update to the latest security release as quickly as possible, and disable any internal caching until then to minimise the chances of hitting this bug.

If you're running Squid and want to check whether you could be affected, we put together a small Python script to run against your logs. You can use it as a base; you may need to tweak it based on your log format. Be sure to run it only against cookie-authenticated URLs, otherwise you will hit a lot of false positives.
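The script itself isn't reproduced here, but the check it performs is simple enough to sketch. This is an illustrative reimplementation, not the original; the log fields are assumptions, so adapt the matching to whatever your Squid logformat actually records:

```python
# Flag cache hits served to MSIE user agents where no Vary header was
# logged -- the signature of the bug described above. Illustrative only;
# field layout depends entirely on your Squid logformat.

HIT_MARKERS = ("TCP_HIT", "TCP_MEM_HIT", "TCP_IMS_HIT")

def suspicious_lines(lines):
    for line in lines:
        if "MSIE" not in line:          # only IE/Chromeframe requests
            continue
        if not any(h in line for h in HIT_MARKERS):
            continue                    # only responses served from cache
        if "Vary:" not in line:         # no Vary header was logged
            yield line

sample = [
    'TCP_MEM_HIT/200 GET /inbox "Mozilla/4.0 (compatible; MSIE 7.0)" -',
    'TCP_MEM_HIT/200 GET /inbox "Mozilla/5.0 (X11; Ubuntu) Firefox" Vary: Cookie',
    'TCP_MISS/200 GET /inbox "Mozilla/4.0 (compatible; MSIE 7.0)" Vary: Cookie',
]
flagged = list(suspicious_lines(sample))
assert len(flagged) == 1  # only the IE cache hit with no Vary is flagged
```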

Filed under: Canonical No Comments

On open sourcing Ubuntu One filesync

This week has been bitter-sweet. On the one hand, we announced that a project many of us had poured our hearts and minds into was going to be shut down. It’s made many of us sad and some of us haven’t even figured out what to do with their files yet 🙂

On the other hand, we’ve been laser-focused on making Ubuntu on phones and tablets a success, and our attention has moved to making sure we have a platform that is rock-solid, scalable, secure and pleasant to use for developers and users alike. We just didn’t have the time to continue racing against other companies whose only focus is file syncing, which was very frustrating as we saw a project we were proud of be left behind. It was hard to keep feeling proud of the service, so shutting it down felt like the right thing to do.

I am, however, very excited about open sourcing the server-side of the file syncing infrastructure. It’s a huge beast that contains many services and has scaled well into the millions of users.

We are proud of the code that is being released and in many ways we feel that the code itself was successful despite the business side of things not turning out the way we hoped for.

This will be a great opportunity for those of you who’ve been itching to have an open source service for personal cloud syncing at scale; the code comes battle-tested and with a wide array of features.

As usual, some people have taken this generous gesture “as an attempt to gain interest in a failing codebase”, which couldn’t be more wrong. The agenda here is to make Ubuntu for phones a runaway success, and in order to do that we need to double down on our efforts and focus on what matters right now.

Instead of storing away those tens of thousands of expensive man-hours of work in an internal repository somewhere, we’ve decided to share that work with the world, allowing others to build on top of it and benefit from it.

It’s hard sometimes to see some people trying to make a career out of painting everything that Canonical does as inherently evil, although at the end of the day what matters is making open source available to the masses. That’s what we’ve been doing for a long time and that’s the only thing that will count in the end.


So in the coming months we’re going to be cleaning things up a bit, trying to release the code in the best shape possible, and working out the details on how best to release it so it is useful for others.

All of us who worked on this project for so many years are looking forward to sharing it and look forward to seeing many open source personal cloud syncing services blossoming from it.


It’s time

So, I've been around the Ubuntu community for a while. I installed 4.10 (Warty Warthog) as soon as it came out, I was fighting to keep my Debian installation usable at the time. I instantly fell in love and dove into the community, I wanted to do whatever I could to make the project succeed. It was exactly what I was looking for. At the time, Canonical was also shipping CDs to anyone who wanted them, which gave the project a much more professional feel to it.
And, the focus Mark set for the project turned out to be the right one: it very quickly converted thousands of open source enthusiasts, and a solid, technically capable community started to be built around it. Soon enough, with the focus laser-sharp on making Ubuntu as usable as possible, non-technical folks started to show up, people who were Windows users but were tired of it and looking for something better. These people gave our project an awesome foundation for support (once they figured out how to make certain things work, they'd immediately help the next person who came along with that problem). Translations grew, since it was a great way for a non-technical person to help. Documentation grew, advocacy grew, communication, marketing, you name it, it was growing.
As things moved forward, there were some tough decisions to be made. I remember when Compiz came around: it was very immature and almost guaranteed to break your system, just have a quick read through the Slashdot comments! You could very easily replace the word "compiz" with "unity" when it was first introduced and you'd have most of the same comments that went on when that first happened.
But, it was the right choice. The hard and unpopular choice. We, the community at large, mostly wanted a stable system. Mark, Canonical, were pushing to mature the technology so as to be able to build awesome things on top of it. It was the same story for Pulseaudio, the same for binary drivers; we've been here before, over and over. Change is very hard, and a lot of it feels wasteful. Nobody wants to waste their free time, you want to make it count.

As for where we stand today, I first want to be clear that my initial reaction to the flood of changes being proposed upset me as well. A lot. I laid low for a while so I could clear my head and understand what was going on before reacting. When the Rolling Releases proposal came out, I read the email on ubuntu-devel (which, btw, is where I read about it, there was no internal Canonical "announcement") and I was frustrated with how it was being presented. It felt like Canonical imposing whatever they wanted, bulldozing over the community. How could Rick do something like that? He's a smart and well-intentioned person, this isn't the smart thing to do. I started writing up an angry email to the Community Council, and as I did, I stopped to re-read the original email to rant with specific references. When I did, I couldn't believe my eyes. The email was clearly stated as a proposal, open to discussion, with quite a bit of work done beforehand, ending with:

"Such a change needs to be discussed in the Ubuntu community. Therefore, I asked my team to put together a strawman proposal for how such moving to a monthly cadence with rolling release might work."

Go ahead, read it yourself. As a long-time member, my gut feeling is that in the past this would have been presented to the Technical Board first to be discussed, and then a wider conversation would be had. But the reverse actually makes more sense to me: have a wider conversation first, then bring it to the Technical Board.
So, now I deleted my email and started all over again. I explained how I was feeling rather than rant about things that apparently didn't happen as I imagined them, and just admitted that I no longer knew where we were as a project and needed to talk it out a bit.
So we did. We talked, vented, ranted, looked at the positive side of things, the negative, remembered the past, imagined the future.

The way I see things now is that the project has changed. But this was the path all along; it should have been more obvious. First we won the Linux distro user base, gained support, a community, a clearer focus on what less technical people wanted, and it felt great. People were moving to Ubuntu left and right, first on the desktop, then the server migrations came along with it. But that was not the goal. The goal was (and I quote from bug #1) "Our work is driven by a belief that software should be free and accessible to all." The "all" part of that is the key. That's why we made the desktop slow and buggy for a while to introduce compiz, even though it didn't really fill any need for technical users. Same with Unity, same with Pulseaudio, same with the Ubuntu font, same with shipping free CDs to anywhere in the world.
So as we progressed in our goal, technical users felt more and more distant from what was changing, because they were no longer the primary user. It makes the "scratch your own itch" part of free software a bit harder. In exchange, I started to meet taxi drivers who were Ubuntu users, musicians, graphic designers, writers. I'd see Ubuntu out in the wild in the strangest places.

And now, the world has changed. It no longer seems like the way to make computing available as free software to everyone can be accomplished with just a great desktop. Mobile phones and tablets are where most people's time seems to be shifting. It's a multi-device world and it's here to stay. If we want to fix bug #1, we now need to change tactics and tackle the full story. There seems to be a window of opportunity for us as a project right now, and I don't think we'll get many more of these. It feels like a now-or-never kind of moment, and I can't imagine having invested most of my energy in the last 8 years just to fade away into a niche market. That's not what I set out to help do.
It's going to be a bumpy ride for a while, we need to move fast, and speed is not one of the easiest things to do when you need to find consensus across many different people, timezones, interests, goals, agendas and languages. I don't see what other choice we have than to rise up to the challenge and find a way to make it work.

Speaking purely from a personal point of view, I think Canonical will need to push harder for changes in processes, tools, libraries and focuses. I also happen to think Canonical has done poorly at presenting and driving these changes. Not due to a lack of trying to do the right thing, it's just really hard to do. Stress, pressure, deadlines, partners, confidentiality agreements, private negotiations, business deals to ship Ubuntu on millions of devices, it all sets you up to rush and get things done as quickly as possible. That's how the market works. But when you're not immersed in all of that, from the outside, it just looks slightly evil and a bit like bullying.
I think Canonical can and will do better, it has to, I feel the survival of the company partially depends on it.

One thing to remember though, is that free software is very much like evolution, survival of the fittest. This means trying out many different things, and the best ones overall survive and thrive. Competition is essential. The fact that Canonical is putting out there more free software projects is the best thing that can happen to the movement, no matter how many times you yell out that you know for a fact that if that same effort was spent on an existing project it would all be better. If that were true, there would be one Linux distro, period.
As long as it's free software, and Canonical is shoveling code into it, that's what counts at the end of the day. Working, maintained code. Don't forget that. If Canonical is wrong about, let's say, investing in Mir being a better bet than investing in Wayland, ultimately, it's Canonical's money. If it's done in a way that developers are drawn to help, it'll be cheaper and happen faster. It's a win-win. The fact that they are betting on free software no matter what is what counts.

So I think it's time. In many ways this feels like the last big battle. We fought and won a lot to get here; it's now time to win or lose the war.

Filed under: Canonical, Ubuntu 11 Comments

Remote working

There seems to be quite a bit of buzz around Yahoo! effectively laying off remote workers (making them choose to start going to an office or resign), and I've read different perspectives on the subject, for and against remote working.
Having worked at Canonical for over 4 years, and in open source projects for quite a bit longer than that, my knee-jerk reaction is that the folks crying out that remote working just isn't as productive as working in an office are being pretty short-sighted.
Canonical has hundreds of employees working remotely, far more than working in an office, and it seems like we're generally a very productive company. We take on huge competitors who have ten times the amount of people working on any given project, and we put up a pretty good fight. So I can tell you remote working is full of awesome for both the company (productivity, get to choose from a huge pool of talent) and the employee (no commute, less distractions).
I also think that the fact that open source projects are taking over the world at an incredible pace is a pretty huge testament to just how great remote working can be. Open source is even an extreme case, where people aren't available on any regular schedule, without the much tighter and clearer shared goals a company has.


All that said, there are several ways things can go wrong with remote working.

Thoughtlessly mixing remote and co-located teams. All-remote and all co-located tends to work out easier. Mixing these things without having a clear plan on how communication is going to work is most likely going to end up badly. The co-located team will tend to talk to each other in the hallways and not bring the people who are remote into the loop, mostly because of the extra cost of communication there. If making decisions in person is accepted, and there are no guidelines in place to document and open up the discussion to the full audience, then it's going to fail. Regardless of remote-or-not, documenting these things is good practice, it provides traceability and there's less room for people to go away with different interpretations.

Hiring remote workers that are not generally self-directed. I can't stress this point enough. Remote working isn't for everybody, you have to make sure the people who are working remotely are generally happy making decisions on their own on a daily basis, can push through problems without a lot of hand-holding and are good at flagging problems when they see one. These types of people are great to have on site as well, but in a remote situation this is a non-negotiable skill.

Unclear goals as a team or company. If what people are supposed to be doing isn't crystal clear to everybody involved, remote working is going to be very messy. Strongly self-directed people are going to push forward with what they think is the right thing to do (based off of incomplete information), and less strongly independent people are going to be reading a lot of RSS feeds.


I also think there are some common sense arguments against remote working that are actually an argument in favor of it.

Slackers will slack harder when at home. So, if you're at home, who's going to know if you spent your morning watching TV or thinking about a really hard problem? When you're at the office, it's much easier to check up on what you're doing with your time. I think that if you have an employee whose use of time you need to check up on, you have a problem. The answer is not going to be to put him in an office and get him to learn how to alt-tab very quickly to an IDE when you walk by. You should be working with them to make sure their performance is adequate. If it's not, and you can't seem to find a way around it, fire him. Keeping him around and force-feeding him work is a huge waste of time and money. Slackers are going to slack harder at home; use that to your advantage to more quickly weed out people who aren't up to the task or don't care anymore.

Communication is more expensive. It is. It also forces people to learn how to communicate better, more concisely, and in a way that's generally documented. While you can easily have calls, in the end you need to email a list or some form of communication that reaches everybody. So there's a short-term cost for a long-term benefit. You may need that short-term benefit right now, in which case you bring people together for a week or two, spend some of that money you've saved on infrastructure, and push things forward.


So, in general, I think having remote workers forces a company to have clearer, well-communicated goals and better documentation of decisions; it makes you hire driven and self-directed people, think long and hard about your processes, and opens you up to hiring from a much larger pool of people (all over the world!). I think those are great things to have pressuring you consistently, and they will make you a better company.
Like everything else, though, if you have remote workers and pretend they are the same as co-located ones, it's going to fail.


Losing perspective

12.10 is out, how awesome is that? Go ahead and get it if you haven't yet. I upgraded all my computers months ago and they've been stable and receiving polish and new features almost every day since, how awesome is that? It has tons of new features that put closed-source competitors to shame, how incredibly awesome is that!? It looks nicer, it works faster on my slower machines and a lot of the small bugs in 12.04 have magically gone away, awe-some.

Then, as if things couldn't seem better in a project nearing its 10th year of attempting to take over the world in a lot of very literal ways, Mark spontaneously decides to take on more financial risk by further opening up the current skunkworks projects Canonical works on, and what happens? A lot of crap gets thrown his way. How insane is that?

I can understand competitors taking the opportunity of spinning this as a bad thing, highlighting the fact that there are such projects at all, and how X or Y project is 100% open and pure (although, maybe not as successful). Then there's the usual Ubuntu trolls, folks who are bitter about Ubuntu being successful in the format that it adopted, blending commercial and community development in a unique way that requires a constant balancing act. They were betting on Ubuntu failing and they hate that it hasn't, they hate that for a huge number of people "Linux" actually means "Ubuntu". They also hate that there are millions of people who don't even know (or care) what Linux is, and happily use Ubuntu. That's fine, this is how life works, let them be bitter.
But I cannot understand strong, long-time Ubuntu members and contributors bashing Mark, Canonical or Ubuntu. It feels very disconnected from reality.
I can understand Unity sucked, everybody hated it and it made everything slow. It doesn't any more. In fact, it's crazy fast, crazy stable and it sets us apart from everybody else by a very long stretch. In some areas we leap-frogged a worthy competitor like Apple, and in many cases even forgot about Windows, our bug #1. This happened with many things, compiz, pulseaudio, empathy, you name it. Those sucked too, but ultimately rocked. For us, and for the rest of the open source ecosystem.

And yes, now you can purchase things from the Dash. It'll offer up items even though you maybe weren't looking to buy something, just opening your email. But it helps the project. It helps fund the very same things that make Ubuntu different from everyone else, because we get to invest an enormous amount of money in user testing, design, custom engineering and closing deals with OEMs so Ubuntu ends up in the hands of millions of new users every year.

I have an unfair advantage over most of you, since I've worked at Canonical for over 4 years now and have seen a lot of what it costs in terms of actual dollars. It's not that hard to imagine, though: flying hundreds of people across the globe every 6 months to get together, work and make it feel more like a community is, by any simple math, hundreds of thousands of dollars. That is a lot of money. And when you complain that a feature you can ultimately disable bothers you and should be removed (or disabled by default, cutting off any real chance that it'll generate significant revenue), also take a minute to think that you're saying Mark should take that money out of his own pocket instead, just so you can feel more comfortable with yourself.

I can empathise with people immediately thinking of all the terrible examples of OEMs bundling adware with their computers that annoys people to no end, just to squeeze every single penny out of each user to bump up their stock. But this is not the same. Mark's been crystal clear that a lot is being developed to make this a fantastic experience, and I have inside knowledge to vouch for that. It is also all free software, has been for almost 10 years, consistently, and has shown no signs of changing. In fact, I started writing this because Canonical is trying to make the few bits that aren't fully permeable to the community more open. How fucking awesome is that?

I think it's time to stop, breathe and gain some perspective again.


Real collaborative design with open source software

Last week we organized a local Ubuntu conference in Buenos Aires, Argentina, which we plan to make a regional conference from now on, thanks to the help of our friends in the Uruguay LoCo. The conference was great, but by far what stayed with me was a talk, and some subsequent conversations, with Guillermo Espertino about how a new-ish, small group of designers who use open source software to design professionally had gotten together and started a community called Gráfica Libre. These guys individually do some amazing things. As a group, they've blown my mind  🙂

These are designers who are using 100% no-excuses free software on a daily basis to design and ship professional designs to customers.

These are some of the things they've designed as a group for the conference:

The video was edited by Guillermo Espertino, and the 3D animation was done by Martin Eschoyez. The Blender source files are available on his website.

This was done by Lucas Romero.


There's also a presentation given by Guillermo Espertino (you can see the work his company does with open source on their website), which you can download (it's in Spanish, though). It highlights the challenges they've faced so far in putting designs together in the open and collaboratively. They still feel they have a few iterations to go until they have a settled process, but it certainly looks to me like they've cracked the hardest part.


Help us organize UbuConLA 2012!

While a lot of you are at UDS, several Latin American LoCos are working hard to organize a local Ubuntu conference.
Things are going really well, we're 4 weeks away, but we're a little short on funds. Every year the same people who organize it end up having to pay for many things themselves despite having a few generous sponsors, and this year I'd like to change that, so I set up a small but valuable fundraising campaign, and we could really use your help.
The site is in Spanish, so it may take a bit of blind surfing to get around but it should be fairly easy once you've been sent to PayPal  🙂

If you have some spare change, head on over here:


Ubucon 2012, Buenos Aires edition

This June 1st and 2nd, we will be holding an all-Ubuntu conference for the second time in Argentina, with plans to make it regional from now on (the next one is in Uruguay!).
Even though it's in Spanish, I'd like to open up the Call for Papers here on planet Ubuntu as well, in case anyone reading is close by  🙂



Ubuntu-AR and Ubuntu-UY, the Ubuntu local teams for Argentina and Uruguay, invite members of the international free software community and of other Ubuntu local teams in Latin America to submit talk proposals for the annual Latin American Ubuntu conference, UbuConLA 2012.


What is UbuConLA?

An international, annual, travelling event for Latin America, born from a shared idea among members of the Ubuntu local community teams in Argentina and Uruguay, with the following goals:

  • Share the skills and experience gained in enterprise environments by Latin American specialists, across projects and contexts of all kinds
  • Showcase the level of maturity reached by Ubuntu GNU/Linux and by the professionals who work with it in enterprise environments, whether as consultants, users, or IT managers
  • Bring together, technically and socially, users and specialists from across Latin America, both to acquire new knowledge and skills and to take advantage of and/or generate business opportunities in the region
  • Spread the "Ubuntu" spirit of the Latin American community
  • Establish UbuConLA as "The Ubuntu Event" of the year for Latin America.

Upcoming venues: 2013 - Montevideo, Uruguay. 2014 - Colombia

Where, when and how

The conference will take place on June 1st and 2nd, 2012, at the Buenos Aires campus of the Universidad Austral, in Argentina.

The author of each selected talk must take part in person as a speaker at the event.

Each talk may have at most 3 authors, and must be presented in Spanish.

Proposals must be sent via e-mail as plain text with an attachment, in one of the formats listed below, to , with the tag [CFC] followed by the title of the proposed talk in the Subject of the message.

Proposals will be accepted up to and including May 1st, 2012.


Friday will be dedicated to companies and professionals, with 3 tracks running in parallel:

  • Track 1: Success stories: why the migration/project worked - Characteristics of the context - Things to consider beforehand to minimize the risk of failure
  • Track 2: How to migrate companies to Ubuntu - Methodologies and best practices applied - When and why to use Ubuntu in small and medium businesses
  • Track 3: Corporate support for Ubuntu - I have a problem, now what? - Local support alternatives

Saturday will be aimed at enthusiasts and the community at large, also with 3 tracks running in parallel:

  • Track 1: Success stories: community
  • Track 2: Beginners - What is Ubuntu? - How to install Ubuntu - I installed Ubuntu, now what? - Migrating from Windows to Ubuntu - Getting involved in the community (introduction) - Lightning talks (5 minutes)
  • Track 3: Experienced users - Customizations - Ubuntu Server - Unity, why? - How to report bugs in Ubuntu - Ubuntu TV/Android - Lightning talks (5 minutes) - Other free software and community topics related to Ubuntu


Proposals must include the following information:

  • Title
  • Author - full name
  • Organization the author belongs to/represents
  • Place of residence
  • A short biography of the author
  • Contact phone number and e-mail address
  • Track in which the talk would be presented
  • Type (workshop or talk)
  • Description (an abstract or outline that allows its quality and point of view to be evaluated)
  • Estimated length (talks are generally 25 minutes)
  • Requirements/resources needed (multimedia equipment, computer lab, sound equipment, etc.)
  • Level (basic, intermediate, advanced)
  • Intended audience (general public, companies, technical)
  • Prior knowledge expected of the audience.

Once a proposal has been evaluated, authors will be informed of the selection result through the contact details provided.

Presentation format

Accepted formats

  • ODT (LibreOffice and equivalents)
  • Standard HTML
  • PDF
  • Plain text


A license must be specified for the presentations that allows the organizers to distribute the material freely. Any questions on this subject can be sent via

We would appreciate help spreading the word about this call and about the event itself.
