Thursday, February 26, 2015

The two sides of Pi

The Raspberry Pi Foundation took the world by surprise when it first introduced the Pi: a "complete", small and inexpensive computer.

However, it was really nothing new. Maybe most people didn't realize (or didn't want to) that they already carry a fully-fledged computer in their pocket. In fact, the latest ARM SoCs are as powerful as an entry-level desktop PC and, on the graphics side, manufacturers like NVIDIA with its Tegra X1 are getting close to previous-gen gaming consoles. And since the current generation hasn't evolved that much in terms of graphics, it's most likely the gap will be closed around mid-life for both the PS4 and the Xbox One.

The fact that you have a PC in your pocket is even truer when it comes to Atom-based phones: you really do have a (traditional) PC-architecture CPU in your phone!

The best thing about the Pi, from my humble point of view, is not that it's a cheap (underpowered) PC, but the development community around it, which has sparked a development fever.

It's not the first time we've seen this; Arduino, mobile apps and, before them, webpages went through the same. From time to time, a new technology (or a new approach to an old one) comes along that makes developers and non-developers alike jump on the bandwagon, and that has two sides: fast evolution of the technology on one hand, and intrusiveness and poor results on the other. In the 90s, for example, everyone with a PC and FrontPage Express became a web designer/developer, even with no idea about design, either aesthetically or technically. This helped the web grow extremely fast, but sometimes a technical mess was hidden under a pretty face.

So, as time passed, part of the business was left in the hands of unprepared "developers" who had some special interest in computers and started investigating how to do things (which is good), but never studied IT (which is not so good if that's your intended job). However, IT is now so deeply bonded to our lives that some governments and institutions want computer programming to be a school subject like physics or history.

Nowadays, we have a set of great tools for this, like Scratch, drag-and-drop interfaces for Arduino, high-level C libraries for the Pi... But, again from my humble point of view, this can give a very distorted idea of what Computer Engineering and software development are: just move blocks around and things work magically.

Experience shows that, most of the time, software factories don't work. And I don't mean software development centers, but factories in the classic sense, where developers have a perfect specification and thus could even be replaced by monkeys with keyboards. Most of the time, developers face implementation problems that are not covered in docs and technical diagrams: datetime formats and timezones, database transactions, client-server communications, sessions, etc. are everyday issues that arise during development and that developers must deal with intelligently. At the end of the day, a developer is a problem-solver, not a code-writer.
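To give just one concrete example of the kind of spec gap I mean, here is a minimal sketch (in Python, with invented values) of the classic datetime pitfall: timezone-aware timestamps compare correctly across servers in different zones, while a naive copy silently loses that information.

```python
from datetime import datetime, timezone, timedelta

# A server in UTC and a client in UTC+2 both record "the same moment".
utc_now = datetime(2015, 2, 26, 12, 0, tzinfo=timezone.utc)
local_now = utc_now.astimezone(timezone(timedelta(hours=2)))

# Aware datetimes compare correctly regardless of zone...
assert utc_now == local_now

# ...but a naive copy (what you get if someone stored the wall-clock string
# without its offset) can no longer be compared safely: it looks 2 hours later.
naive = local_now.replace(tzinfo=None)
print(naive)    # 2015-02-26 14:00:00
print(utc_now)  # 2015-02-26 12:00:00+00:00
```

No requirements document spells this out; it's the developer who has to decide, mid-implementation, that everything gets stored in UTC.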

I myself have seen code with "database results" hardcoded, because the development team had no idea how to make the real query work, so they did it to "buy some time". This is the danger of setting the development standard too low and undervaluing what developers must be.
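For illustration only, here is a hypothetical reconstruction of that anti-pattern (all names and values invented): a function that returns canned "results" that happen to match the demo data, next to what it should have been, an actual parameterized query.

```python
import sqlite3

def get_user_orders_fake(user_id):
    # Anti-pattern: hardcoded "database results" that ignore user_id entirely.
    return [{"order_id": 1, "total": 49.90}, {"order_id": 2, "total": 12.50}]

def get_user_orders(conn, user_id):
    # What it should have been: a real, parameterized query.
    cur = conn.execute(
        "SELECT order_id, total FROM orders WHERE user_id = ?", (user_id,)
    )
    return [{"order_id": oid, "total": total} for oid, total in cur]

# The fake survives a demo but falls apart with real data:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, user_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 7, 49.90), (2, 7, 12.50), (3, 8, 5.00)])

print(get_user_orders(conn, 7))    # real rows for user 7
print(get_user_orders_fake(99))    # the fake answers the same for ANY user
```

It demos fine, until the first user who isn't in the canned list shows up.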

Coming back to teaching programming in schools: it will teach kids neither how things are really done nor how computers work. Instead, it will paint a world where developing is like playing, devaluing even more the image of IT professionals, and in a few years we'll reach the point where everyone has "developed" at school.

It's been a long fight we, Computer Engineers, have lost against intrusiveness. Maybe easily achievable first results and the use of a single tool for both creating and consuming are to blame. Most of us have lived the situation where, upon telling people we are Computer Engineers, someone replies "my nephew does webpages". It's like saying "my nephew is a butcher" when talking to a surgeon...

Anyway, the Pi is a great device for fast prototyping. We used it in the Armiga Project, where it helped us improve the product within a very tight timeframe, and we are eager to get our hands on the new model :)

Thursday, October 2, 2014

Better by design: when less is way more

As technology evolves and gets cheaper, more and more people jump on the bandwagon. Technology offers a new world of possibilities and enhances our lives. But does it really come cheap?

In the early days of computing, computers were huge machines built by engineers for engineers. They were far from user-friendly and no average Joe could even turn one on (nor did they want to). As computers became smaller and cheaper, a new concept emerged: the personal computer. A lot of criticism came with it, as many people doubted the need for a computer at home. Those computers, however, were still not too user-friendly, with command-line interfaces (CLIs) and a limited set of commands you had to keep in mind in order to use them.

Evolution continued with the GUI (Graphical User Interface) and our beloved mouse. Now computers were really and truly intuitive and user-friendly; or were they? With Windows 3.1, Microsoft started its conquest of the world, and I remember spending lots of time moving windows around, trying different color schemes, playing Solitaire and Minesweeper, because there wasn't much else to do... for me. It was, however, a breakthrough in terms of office productivity. With each evolution of Windows, the applications that ran on it became more powerful and more complex: menus, sub-menus, nested menus, drop-downs and standalone panels when there were too many options, macros and even scripts! Nowadays you can do a ton of things (if you happen to know how to do them).

It would be shocking to know how much of a modern device's power is actually used, compared to the machines of the early years. I think Pareto's law applies here: 80% of users use 20% of the power of their computers (hardware and applications), and I'd guess that's simply because those applications have grown huge. The concept of "more is better" is to blame. It's been shown that the broader the options, the less happy the average individual is with their choice.

During the early cellphone days, people spent lots of time choosing the ringtone, alarm music, background color... Then came the first smartphones, with Symbian, Windows CE and Palm OS. Now you could spend days playing around with options! Android 1.x was the pinnacle of options...

Then the iPhone arrived, and it was a truly "better by design" breakthrough: simple, functional and with fewer options. It freed the user from the task of thinking about how they wanted things to be; things were the way "they should be". An orthogonal, simple and logical design that everybody could understand. In fact, Android has since evolved that way, simplifying most things, both in the general interface and in the settings... Apple, however, has gone the other way around...

Another great example of this paradigm is the Nest thermostat. The user just sets the temperature every now and then and the device learns the pattern. It makes the technology fade to the point of almost disappearing. "Better by design" is, as Jony Ive said in the original iPad introduction video, when you don't have to change to fit the product; it fits you. It's a pity Apple completely lost that focus after Jobs passed away.

Beyond technology, bad design principles surround us everywhere, as Donald A. Norman describes in one of the best design books: The Design of Everyday Things.

In my years of experience, I've seen one major cause of poor design: design made by engineers. We tend to cover every possible scenario and possibility. However, "better by design" implies simplifying and thinking about the best way to do what the product is intended for, not providing support for every possible way of using it.

So next time you find yourself developing a product, do your users a favor and think about how it will be used before you start :)

Wednesday, August 6, 2014

Agile..., but not that much

It's been a long time since I first went to university to study Computer Engineering, and things have changed a lot since then, but not always for the better, I guess.

During the first years, we were taught general sciences, programming, problem solving... And shortly after that, the classic project lifecycle. We didn't get into project management, but we knew the classic lifecycle phases and project structure: viability documents, analysis, requirements, technical design... The waterfall approach at its best.

You are taught that every stage is like a black box: sandboxed, with well-defined interfaces that link the phases together. Everything is clear and tidy; nothing can go wrong, right?

When you start working with this approach, you'll sooner or later realize that it's far from perfect. Sometimes it's even unusable, and some of its premises aren't even true! For this method to work, every participant in the project (including clients and suppliers) must be exactly on the same page and know the whole system, so that nothing new can come up after a phase is closed. Taken to the real world, it just doesn't fit.

Most of the time, the picture looks much like this one:

After suffering this incoherence, some clever people came up with new approaches that are widely used nowadays under the common name of agile methodologies. I myself am a certified SCRUM Manager, and it's been of great help over the last years.

However, I've found an alarming number of engineers who are "agile extremists" and don't see the risk of completely eliminating traditional methods. This adds up to the overwhelming number of new startups born from the ideas of non-technical people who want immediate results in order to try to raise money. MVP (Minimum Viable Product) is a keyword in the startup world, and the fastest way to get to one is through fast prototyping.

I've worked both in a methodology-strict company, which complied with and applied a big bunch of methodologies like Prince, ISO 9001 and 27001, CMMI level 3 and ITIL, and in a company with no methodology at all. The latter could be an extreme example of Lean and of how it shouldn't be. Even though I've never been a friend of documentation, I have to admit the first one worked better. However, it started working at its best when we redefined the traditional waterfall stages as a cycle, with the introduction of SCRUM.

The major risk comes, in my opinion, when you try to manage your company in an "agile" way. You have to be quick, you have to be nimble and you have to be agile, but before all that, you need to know where you stand. It's a common error I see very often: a startup with non-technical founders wants an MVP as soon as possible and, to get it, simply looks for developers so the product is done ASAP.

It doesn't matter how senior a developer is; the first thing a technology startup needs is an architect: someone with a wide, long-term view to set the foundations. It doesn't matter much how you implement it, whether it's Java or PHP, REST or SOA; after all, you'll most likely have to re-engineer your product within a 10-year timeframe. But your product has to be thought out before it's built.

Every company needs a backbone and a structure, and a manager cannot be improvising because the company lacks a method. So my humble advice is to take agility with a grain of salt and use it where it's effective: in the production cycle. Be agile, but don't be extreme.