Showing posts with label Integration.

Saturday 23 July 2016

The PayTV Platform VS Product Debate

This post has been on my TODO list for four years. It started as an attempt to clarify the role of the STB: is it a Platform, or is it a Product? I now feel this debate needs to be extended beyond the STB domain to focus on the bigger picture - the end-to-end video technology stack. And to do that, one has to start right at the beginning: understanding the future design of a PayTV business... This post is a work in progress, my thoughts are not fully structured and far from complete, but I needed to get this thing off my Trello TODO list because it was starting to burn a hole in my screen, so here goes...

The topic of what drives innovation in software companies - understanding the difference between a Product and a Platform - is not a new one. It has been discussed since the early 90s in the lead-up to the evolution of PC Operating Systems & Applications, in the early 2000s at the dawn of the Internet age, and in the mid-2000s with the rise of mobile platforms and the resurgence of Apple. A few articles by Michael Cusumano, a leading expert on the subject, discuss this Platform/Product topic:

However, in the context of Digital TV, Pay TV, or what the market is now calling "Video Entertainment", I feel not much has been written on this subject. This is probably because these technology & architecture platforms have historically been proprietary, in what has been a much-guarded and competitive landscape, traditionally dominated by technology vendors in the Set Top Box middleware space.

I believe the very same challenges that existed twenty years ago in the PC world are really no different to the challenges Pay TV software systems face today. Whilst Pay TV exists to generate as much revenue from its subscribers as possible (one really can't argue with that bottom line), one cannot ignore the fact that an enormous amount of software powers the business (consumer devices, broadcast systems, security & encryption software, billing & subscriber management systems, and more recently internet backend services that deliver digital services on devices other than just set top boxes) - so much so that one could argue that Pay TV is fundamentally a software technology-driven business. And since this is pretty much the reality of today, the topic of Products v Platforms in PayTV is all the more relevant!

Historically, the medium of consumption of PayTV was a device called a Set Top Box (STB) - a piece of hardware that gave a customer access to the broadcast content. The STB offered a rudimentary user interface (an application / electronic program guide) that primarily allowed users to find content to watch - and this basic use case can still be considered the primary use case for PayTV, despite the bells and whistles that come with flashy graphics. The interface exists to allow users to find and watch content, period.

I believe the STB, in terms of software architecture, has to evolve from the historic notion of a standalone product to becoming part of a platform ecosystem, just like any other modern operating system (iOS, Android, etc.) that exposes a set of capabilities allowing software components to be re-used across devices. With the increasing convergence of mobile/internet/broadcast, the STB can no longer be considered its own island with its own unique software stack (some might argue these stacks are archaic), somewhat silo'ed and isolated from the more modern platform software driving mobile (smartphone / tablet) devices.

I have worked in the STB world for most of my career. I've always felt that we were reinventing software systems, architectures and design patterns that have been known since the sixties (really). In the last eight years, we started moving to the converged world, merging broadcast and internet TV, and in doing so the constraints of the STB world started to reveal themselves - especially when it came to inter-networking features, light-weight components, services-based design patterns and the challenge of re-using software components across different devices. I've seen companies and teams struggle to adapt to this new challenge, especially when it came to truly understanding what differentiates a Product from a Platform.


Is the Set Top Box a Product, or is it a Platform?

An example STB Platform
I believe the days of the STB as a unique device for viewing content, as the primary means of interaction, stuck in the world of broadcast TV - are over.

Customers don't care what a PayTV operator labels & markets its device as. The STB is simply a means to an end - a thing that lets you connect your TV to a world of content provided by a PayTV operator through some transport medium (satellite, terrestrial, broadband, whatever). And in the connected world of the internet, customers expect to access the same experience on any mobile device (because it is possible, and it's expected).

So initially, in isolation, looking at the STB alone - I view the STB as providing platform software, just like iOS or Android.
From a hardware perspective, the STB device is also a platform that allows the PayTV operator to incrementally add features over time. The hardware usually ships with future-looking capabilities that the STB software can realise over a period of time (historically a 5-7 year shelf life, now being reduced to more like 3-5 years).

This may sound like common sense, because it is what consumers are used to with the more modern iOS & Android platforms, right? Yes, but in the world of PayTV I've seen how technology implementations can go wrong - costing PayTV operators time, money and energy, lost opportunities and market share - because they didn't take the time to properly assess the nature of their products and services, as well as the underlying technology ecosystem needed not only to satisfy their current business needs, but also to power & drive future growth (at little cost, avoiding long development cycles, re-work and silo'ed, ring-fenced deployments that make maintenance a nightmare).

I also believe we need to go further than just thinking in the STB world - we need to think about a platform world, a world of re-use. The same applications and user experience must exist on all devices, regardless of technology domain. If the customer expects a unified, seamless experience, then in the same way the software components need to be shared across the architecture domains. I don't expect there to be separate application development teams per device. What usually happens is that a PayTV operator has separate vendors, each with their own stacks: a development team for STB, another for iOS, another for Android, another for PC/Web - all implementing pretty much the same thing. And similarly, separate infrastructure services teams (broadcast headend, internet/online backend teams), usually fragmented, working in silos, often with duplication of services.

So the question for me has evolved: it is no longer about the STB being a product or a platform. It is about the End-To-End PayTV Technology Platform - how should a Video Technology Ecosystem be structured today? Out with the old, in with the new is what I say ;-)


Why understand the difference between a Platform & Product?

Monday 16 February 2015

Non Functional Requirements and Sluggishness


There is a lot of talk about whether writing code, or creating software, is really an art, a craft, or a highly disciplined software engineering activity. This post is not about that debate, although I do wish to make my stance clear: great software is just as much a craft as it is a software / systems engineering discipline. One should not discount the analysis and rigour that should go into the design of any software system, be it a relatively simple 4th/5th-tier high-level application built on very easy frameworks (e.g. WebApps, Smartphone & Tablet frameworks), or very low-level, core, native software components (e.g. OS kernel, drivers, engine services).

Take the Set Top Box (STB) for example: a consumer device that is usually the central focal point of the living room. People expect the thing to just work, to perform as well as any gadget in the household, being predictable, reliable and always delivering a decent user experience.

How then does one go about delivering performance in a STB software stack? Is it the job of the PayTV operator to specify the NFRs (Non-Functional Requirements) in detail? Or is it the job of the software vendors (Middleware / Drivers) to have their own internal product performance specifications and benchmark their components against their competitors? Does the PayTV Operator benchmark vendors against the performance of legacy decoders in the market, for example: this new product must be 20% faster than all in-field decoders? Or is there a collaboration between the technical owners (i.e. architects) to reach mutually agreed targets?

Take the problem of Sluggishness. According to good old Google,
sluggish
ˈslʌɡɪʃ/
adjective
  1. slow-moving or inactive.
    "a sluggish stream"
    synonyms: inactive, quiet, slow, slow-moving, slack, flat, depressed, stagnant, static
    "the sluggish global economy"

When we're field testing STBs over long periods of time (or sometimes after not-so-long usage), people & customers report:

  • The box got very sluggish, needed to reboot
  • The STB isn't responsive to remote presses, takes 4-5 seconds to react 
  • Search takes too long and I can't do anything else but wait for search to complete
  • Channel change is slow compared to my other decoder
  • When using the program guide or navigating through the menus, the box gets stuck for a few seconds as if it's processing some other background activity - it's very irritating to my experience

This feedback, whilst extremely useful to the product team, is very hard for engineers to characterise without access to system logs to trace through the events that led up to the slowdown that degraded the user experience.

In my view, based on my own experience of writing STB applications and managing complex systems projects, I believe that unless we quantify in a fair amount of detail what sluggishness means, it becomes a cop-out for passing the buck to other parties: either the product owners didn't take time to functionally spec out the requirements, or the application is using API services that it doesn't have control over...

In the remaining part of this post, I will touch on an example from a previous project of how we handled the problem of sluggishness, through a process of rigorous Non-Functional Requirements focused on Performance Specifications...
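To make the idea of "quantifying sluggishness" concrete, here is a minimal sketch of what a performance NFR might look like when expressed as measurable targets rather than anecdotes. The metric names and threshold values are illustrative assumptions on my part, not figures from any real specification or from the project described above.

```python
# Minimal sketch: expressing STB performance NFRs as measurable targets.
# Metric names and thresholds below are illustrative assumptions only.
from statistics import quantiles

# Hypothetical NFRs: metric -> (target in ms, percentile to judge at)
PERFORMANCE_NFRS = {
    "key_press_to_ui_feedback":  (250, 95),   # remote press acknowledged on screen
    "channel_change":            (1500, 95),  # video + audio presented after zap
    "epg_grid_render":           (800, 95),   # guide grid drawn and navigable
    "search_results_first_page": (3000, 95),  # first page of results shown
}

def evaluate(samples_ms: dict[str, list[float]]) -> dict[str, bool]:
    """Judge each metric's Nth-percentile measurement against its target."""
    results = {}
    for metric, (target_ms, pct) in PERFORMANCE_NFRS.items():
        samples = samples_ms.get(metric, [])
        if len(samples) < 2:
            results[metric] = False  # treat missing measurements as a failure
            continue
        # quantiles(n=100) returns the 1st..99th percentile cut points
        observed = quantiles(samples, n=100)[pct - 1]
        results[metric] = observed <= target_ms
    return results

if __name__ == "__main__":
    field_log = {"key_press_to_ui_feedback": [120, 180, 240, 900, 210]}
    print(evaluate(field_log))
```

The point is not the specific numbers - it is that once targets like these are agreed between operator and vendors, "the box feels sluggish" becomes a measurable pass/fail conversation instead of a blame game.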

Tuesday 27 January 2015

To Ringfence or not to Ringfence?


I have been running systems & software engineering projects for some years now, and the topic of "ringfencing" teams seems to be quite a recurring one. It often comes up during the early stages of startup/initiation, or later in the project when stakeholders get nervous and edgy as the project begins to show signs of distress (i.e. the launch or completion trajectory doesn't look so good). One of the natural instincts of such management teams is to tighten up control, for example by ringfencing all people into one single delivery team under central command and control.

So what does it mean to ringfence anyway?

According to Google, entering "define ringfence" gives you:
ring fence
noun
noun: ringfence
  1. a fence completely enclosing a farm or piece of land.
    • an effective or comprehensive barrier.
      "he did not say when the merger would be completed or when the ring fence would be abolished"
verb
verb: ringfence
  1. enclose (a piece of land) with a ring fence.
  2. BRITISH
     guarantee that (funds allocated for a particular purpose) will not be spent on anything else.
     "the government failed to ring-fence the money provided to schools"

I have been in both camps: first as an engineer assigned to support a project/customer, and later as the manager driving ringfencing agreements with vendors. As a software engineer, I didn't fully appreciate being assigned to a customer: "You will work only on Customer A and nothing else. If you don't have work to do, speak to the customer project manager. We are getting paid exclusively to support this customer, so when you fill in your timesheets, we need to know in detail the tasks being billed for. Any item not having direct relevance or that is not a direct requirement for the customer will not be billed for." In the product services development business (which is where I spent the bulk of my software engineering career), this isn't strange at all. Teams get assigned to a customer; we work at the behest of the customer, taking work requests in via the customer delivery manager. It does get more challenging, though, when your product is actually designed to be generic with some customisation, such that you need just one core development team to satisfy requests from multiple customers simultaneously - in which case a ringfenced team makes less sense...but hey, we're being paid to do the work, and without the customer our product is not going to sell, so if customers want a ringfenced team, we'll give 'em ringfenced teams ;-)

Later on in my career, I started managing software product development, as a team of product development managers, we were often faced with the requirements from customers to guarantee "resources" to their projects. So during the initial start-up, planning & contractual negotiations, we would roughly estimate the project's backlog, and the impact on resource headcount, guaranteeing the customer will have X number of heads ringfenced for the entire duration of the project. The customer would have first right of refusal if and when a situation arose that we'd need to support new customers, or share development work amongst other projects, at the risk of reusing some of the people originally ringfenced and committed.

As a team responsible for product development, the whole point of having a common core product is that we can easily reuse the same product, with minimal customisation, to deliver on multiple customer projects. Take, for example, the business of set-top-box (STB) middleware or the STB application that we sold to many a Pay-TV operator. Using a common product development team, one team would support multiple customers via one product release. In the case of the STB application, if designed correctly, the major differences lay in the look and feel and custom business logic workflows; the engine components were essentially the same. So we, as a product development team, could with reasonable confidence explain to customers A/B/C that, from a development perspective, a ringfenced team is really undesirable and causes us too much pain - which I will go into later.
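As a rough sketch of the "one core product, many customers" idea above: the engine stays common, and each operator only supplies look-and-feel and a few business-logic hooks. The names (CustomerProfile, launch_app) and the hooks shown are hypothetical illustrations, not our actual product structure.

```python
# Toy sketch: a single core application configured per customer, from one trunk.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CustomerProfile:
    name: str
    theme: dict = field(default_factory=dict)                          # look & feel overrides
    workflow_hooks: dict[str, Callable] = field(default_factory=dict)  # custom business logic

def launch_app(profile: CustomerProfile) -> None:
    """Common engine; customer differences are pure configuration."""
    ui_theme = {"colour": "default", "layout": "grid", **profile.theme}
    on_purchase = profile.workflow_hooks.get(
        "purchase", lambda item: print("default purchase flow:", item))
    print(f"[{profile.name}] starting core EPG with theme {ui_theme}")
    on_purchase("movie-rental")

# Two customers served by the same codebase, differing only in configuration.
launch_app(CustomerProfile("CustomerA", theme={"colour": "blue"}))
launch_app(CustomerProfile("CustomerB",
                           workflow_hooks={"purchase": lambda item: print("B's PIN-gated purchase:", item)}))
```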

However, on the other side of the fence, more recently, I now sit as the customer! When donning the hat of the customer, I am just more comfortable when I have better control - ideally all control! If I'm guaranteed a ringfenced team, the "resources" are totally under my control, focused and dedicated to my delivery. After all, I am paying the vendor to deliver, and I want total control of the delivery team. The better control I have, the better the chances of delivery...plain and simple, accepting the challenges this places on the development teams. What helps is that although I can be the hard customer, I have a pretty good idea that the development vendors can overcome their perceived challenges and difficulties with the ringfenced approach, because I myself was in the trenches and made it work. So I often find myself explaining component architecture, single-trunk codebase management and multiple release stream management to the technical teams.

And really, when you pause for a moment and think about the big picture - you have to appreciate both sides of the argument. The customer wants more control as this will help him/her sleep better at night; the development vendors would like to be left alone, trusted to deliver their product, as they know best how to run their development streams. Trust, however, has to be earned: I as the customer need to have a very good relationship with the vendor, and the vendor must have proven on more than one occasion that they can come to the party and deliver... If there's been some turbulence in the past with non-delivery, you can be sure that I, as the customer, will insist on the ringfenced team (the voice of the software engineer inside my head silenced by the reality of business needs)... In the end though, Delivery always wins!!

Let us look at an illustration that essentially captures how I approach this subject of ringfenced teams:

Friday 25 July 2014

Core competencies/behaviours for System Integration Engineers & Release Managers


I have written in the past on system integration topics; you can click on "Integration" in the tag cloud...

It is interesting how some topics keep recurring throughout one's working life. Recently, I've had to observe, deal with, control and to an extent mediate on issues that shouldn't really have happened - in the area of Software Integration & Release Management, which, in short, is not an easy role to take on in any project, let alone one with a hard delivery expectation and the whole world watching... people become stressed, nervous and emotional, with knee-jerk reactions, etc, etc.

A major emphasis of Release Management & Software Integration/Delivery is on managing stakeholder expectations, above and below. To do this, one expects a certain level of core competency, which I believe centres on these topics:
  • Great judgement when it comes to soft, people skills (be wary of pissing people off, triggering emotive responses & causing undue panic)
  • Clear, unambiguous communication: knowing what to communicate & when (filter out reports, allow time for things to settle)
  • Measured, personally self-motivated & in control (it isn't easy being under fire all the time)
I've sketched a picture of what this world looks like. It shows just the world of set-top-box integration; it gets more intense as you move to end-to-end integration, where there are multiple systems involved (not just the STB), multiple customers, and way more front-line fire to deal with:
The picture should say it all:
  • Release Managers are on the front line when it comes to committing to a project delivery timeline. They front many customers, including some senior stakeholders. Release Managers work closely with System Integration (SI); in fact, they should rely heavily on the output from SI
  • System Integration not only fronts the hard timeline commitments made by the Release Managers, but also has a direct line-of-sight to stakeholders. To be in SI, you need competency not only in the technical toolset, but also a fair amount of personal qualities, including a very clear communication style, to front queries from stakeholders.
  • The Development vendors are usually shielded from a lot of the fire that SI face, although these vendors must deliver quickly and resolve the burning issues in a timely manner. Some projects allow stakeholders direct access to development teams; however, it is not always the best strategy. SI is usually the protector and conduit, often buffering the development teams. Release Managers still manage vendor planning though.
What do I expect, as a minimum, from the behaviours of a System Integration Engineer?
  • DRIVEN - I think this sums it all!
  • Self starter, ramps up in no more than two weeks delivering value
  • Grit, rigour & diligence, hardcore problem-solver
  • Not easily demotivated, although big enough to seek help when truly stuck
  • "To be Blocked is to admit defeat!" kind-of-attitude
  • Goes the extra mile to solve problems
  • Be able to take a high level problem and run with it, even if the problem is unconstrained & unbounded
  • Passionate about problem solving and debugging
  • Clear communicator at all levels
  • Works well under pressure & stress
  • Comfortable with client-facing interactions
  • Doesn't take "if it ain't broken, don't fix it" too literally. Expect tools, scripts to be created to add value, instead of accepting status quo
  • Must grasp source code / software design patterns quickly, even if it has to be done through a process of reverse-engineering
  • Does not have to be told twice what to do
  • Dependable, ability to work with multiple tasks concurrently often with competing priorities
  • Ability to work with third parties, including managing & directing external parties
  • Does not depend on too much guidance from team lead or project manager
  • Thick-skinned; picks up on the nuances of politics and management dynamics
  • Does not panic, is in control of emotions, tackles problems in a clear, calm manner
  • Fully committed to delivery expectations of project, striving to keep customers happy
What do I expect, as a minimum, from a Release Manager?
  • All of the above traits from SI plus:
    • SENSIBLE sums it up!
    • Level-headed.
    • Keen eye for detail, but not to the extent of micromanaging teams
    • Must understand the boundaries, limits of influence (sphere of influence)
    • Communications must be unambiguous - do not invite panic!
    • Must be able to sit in hot seat with calm composure
    • Must intervene tactfully & gracefully in conflict scenarios within the team but also more importantly with third parties / vendors
    • Must be familiar with people management - take time to get to know people on the team
    • Very good listener - be sure to understand what technical people are reporting / expressing, filtering out noise
    • Not to escalate unnecessarily
    • Be prepared to say "I am not sure, but will find out"
    • Be prepared to protect the teams - avoid dictating, be cognisant that people have personal lives, and may have baggage to deal with
    • Seek out guidance when not sure, speak to peers & colleagues who may have experience to help you along the journey
    • Must be capable of realistic strategic planning, but not in isolation of the team
    • Should have led a team of 5-20 skilled engineers in the past, with at least 3-5 years' experience as a team leader
    • Should have been involved in development / integration for a realistic time frame (at least 3 years)
    • Should have software domain knowledge in terms of best practices, even if it's from other similar software systems
    • Must be able to lead and manage all types of people
    • Must be able to plan using past experiences & learnings
    • Must use knowledge & experience of own proven practices & processes to upskill the team if processes are lacking or immature

Monday 19 May 2014

Achieving continuous product delivery in Set Top Box (STB) software projects: Challenges & Expectations


The story all software product development teams face at some point in their journey, not necessarily unique to digital Set Top Box (STB) software development: you entered the product development space, set up a small team that delivered the unimaginable - you got your product out in record time. Customers are pouring in, marketing & strategy are imposing an aggressive roadmap. You've set the pace, possibly surviving death-march-like project deliveries, and now you're expected to do more: Scale! You need to scale your product development & operations to support an aggressive business roadmap, the competition is intense, and you're charged with figuring out how to deliver releases to market as frequently as possible in one calendar year - so you set a target of releasing every quarter, or rather, inherited a Product Roadmap that shows Market Launches every quarter…

You barely survived getting one release to market with quite a small team; how do you scale your operations to satisfy the business demands? What behaviours and habits do you change? What changes can you make to your current practices? You must've been using some form of Lean/Agile technique to meet the original aggressive delivery - is the process that was used before enough, and can it scale to get you from where you are now to where you want to be? What are the implications of achieving continuous product release flow in an environment that is typically unique to Set Top Box hardware & software development?

In this post I highlight just one possible framework that cuts through all the areas involved in product engineering of Set Top Box software releases. I will not go into the intricacies of this environment (search my blog to learn about that) - instead I will use a few pictures to map out the scenarios around achieving continuous product releases to market.

The pay TV space is becoming highly competitive, especially with online / OTT (over-the-top) players like Hulu, Netflix, Amazon, etc. - traditional operators are hard pressed to up their game, moving from the clunky, almost archaic standard TV experience to a slicker user experience offering advanced features, integrating traditional linear services with additional services aimed at stifling the modern competition. No longer can these traditional pay TV providers release new software just once a year; users have become used to frequent updates with new features on their smart devices every few months, and people are becoming more and more impatient with bugs that go unfixed for some time… Nowadays, it is expected that set top boxes are updated with new features as often as possible.

With this in mind, how does one structure a product engineering department to cope with this demand? How should projects be managed? What do we expect from teams? What are the implications of reaching continuous delivery? Where does a typical agile/scrum team fit in? Is Scrum even a preferred method, or do we choose a hybrid model?

[This is still very much a work in progress, made public to get feedback/advice from other practitioners out there...]

Sunday 2 March 2014

Applying Google's Test Analytics ACC model to Generic Set Top Box Product

Earlier this year, I completed what I felt was the best software book I've read in a really long time: How Google Tests Software by James A. Whittaker, Jason Arbon & Jeff Carollo (2012) (HGTS).

Although a couple of years old now, and even though the main driver, James Whittaker, has left Google and gone back to Microsoft, this book is really a jewel that belongs in the annals of great Software Testing books.

The book is not filled with theory and academic nonsense - it speaks directly to practitioners of all types and levels involved in Software Engineering. It's a straightforward, realistic breakdown of how things go down in software product teams: the challenges of fitting in Test/QA streams (actually there's no such thing as fitting in QA/Test, it should always be there, but the reality is that developers don't always get it), balancing the needs of business in terms of delivering on time against meeting the needs of the customer (will real customers use the product, is it valuable), etc, etc.

Please get your hands on this book, it will open your eyes to real-world, massively scaling software systems & teams. Honestly, it felt quite refreshing to learn Google's mindset is similar to mine, as I'd been promoting similar testing styles for much of my management career, being a firm advocate that developers should do as much testing upfront as possible, with Test/QA supplementing development by providing test tools & frameworks, as well as employing a QA Program Manager (known as Test Engineer in Google) to oversee and co-ordinate the overall test strategy for a product. I have written about QA/Testing topics in the past, the ideas there don't stray too far from the core message contained in HGTS.

The book shares a wealth of insider information on the challenges Google faced with product development & management as the company scaled - in both the use of its products and the number of people employed - with explosive growth and large development centres across continents. It is filled with interviews with influential Googlers that give insight into the challenges and the solutions adopted. You will also find information on the internal organisational structures Google implements in its product development teams, along with some useful references to Open Source tools that have been born out of Google's own internal Testing Renaissance, now shared with the rest of the world for free - thanks Google for sharing!

One such tool is the Google Test Analytics ACC (Attribute, Component, Capability) analysis model - which is the topic I want to focus on. In the Set-Top-Box projects that I run, I look to the QA Manager/Tech Leads to know the product inside out, and to develop an overall Test Strategy that outlines the ways and methods of testing that will achieve our Go-To-Market in a realistic & efficient manner (and not adopt process for process' sake). I generally look toward usability and risk-based test coverage, seeking out a heat map that shows me, in broad strokes, the high-level breakdown of the product's feature set (i.e. what is it about the product that makes it so compelling to sell), what the product is made of (i.e. the building blocks of the product), and what the product does (i.e. the capabilities or core feature properties). We generally do this in a Microsoft Excel spreadsheet, manually importing test data from clunky test repositories. What I look out for, as a classic Program Manager, is the RAG (Red, Amber, Green) status that gives me the fifty-thousand-foot view: what is the overall quality of this product, and how far away are we from having a healthy launch-candidate system.

It turns out that Google do pretty much the same thing, but are way more analytical about it - they call it ACC. According to the book, written in 2011/12, "ACC has passed its early adopter's phase and is being exported to other companies and enjoying the attention of tool developers who automate it under the 'Google Test Analytics' label".

So I've created my own generic Set-Top-Box project based on Google's ACC model, aiming to share it with like-minded people working on STB projects. It is not complete, but it offers the basic building blocks to fully apply ACC in a real project. I think this will be useful learning for anyone involved in a QA/Testing role.

My generic-STB-Project is currently hosted on the Google Test Analytics site, along with a complete ACC profile. It took me about four hours to do the full detail (spread over a few days of 10-minute tea breaks, between meetings), and about one hour to get the first draft of the High Level Test Plan with at least one ACC triple filled in. The book challenges you to get to a High Level Test Plan in just 10 minutes, which I actually think is quite doable for STB projects as well!

In about four hours, which is about half a working day, I was able to create a Test Matrix of 254 core high-level test scenarios (note I'm really not a QA/Test expert, so imagine what a full-time QA/Test Engineer could knock up in one day?) that looks like this:

Capabilities & Risk Overview of a Generic Set Top Box


Google currently use this model in their high level test plans for various products, like ChromiumOS, Chrome Browser & Google+. It was developed by pulling the best processes from the various Google test teams and product owners, with the implementation pioneered by the authors Whittaker, Arbon & Carollo. Test planning really stops at the point you know what tests you need to write (it is about the high level cases, not the detailed test scripting). Knowing what to test kicks off the detailed test case writing, and this is where ACC comes in (quoting a snippet from Chapter 3, The Test Engineer):
ACC accomplishes this by guiding the planner through three views of a product, corresponding to 1) adjectives and adverbs that describe the product's purpose and goals, 2) nouns that identify the various parts and features of the product, and 3) verbs that indicate what the product actually does.
  • A is for "Attribute"
    • Attributes are the adjectives of the system. They are the qualities and characteristics that promote the product and distinguish it from the competition. Attributes are the reasons people would choose to use the product over a competitor.
  • C is for "Component"
    •  Components are the building blocks that together constitute the system in question. They are the core components and chunks of code that make the software what it is.
  • C is for "Capability"
    • Capabilities are the actions the system performs at the command of the user. They are the responses to input, answers to queries, and activities accomplished on behalf of the user.
Read more to find out what my ACC model for a Generic Set-Top-Box looks like (first draft!)...
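To give a feel for the shape of an ACC breakdown, here is a toy sketch for a generic STB in the spirit of the HGTS model. The attributes, components and capabilities listed are my own illustrative examples - they are not Google's tooling output and not the actual content of my hosted project.

```python
# Toy ACC (Attribute, Component, Capability) sketch for a generic STB.
ATTRIBUTES = ["Fast", "Reliable", "Easy to find content"]
COMPONENTS = ["EPG Application", "PVR Engine", "Channel Tuner", "Search Service"]

# Capabilities: (component, attribute) -> what the product does for the user
CAPABILITIES = {
    ("Channel Tuner", "Fast"): ["Changes channel within the target zap time"],
    ("EPG Application", "Easy to find content"): [
        "Browses the 8-day guide grid",
        "Filters listings by genre",
    ],
    ("PVR Engine", "Reliable"): ["Records two events while watching a third"],
    ("Search Service", "Fast"): ["Returns the first results page without blocking the UI"],
}

def coverage_summary() -> dict[str, int]:
    """Count capabilities per component - a crude stand-in for a risk heat map."""
    totals: dict[str, int] = {}
    for (component, _attribute), caps in CAPABILITIES.items():
        totals[component] = totals.get(component, 0) + len(caps)
    return totals

print(coverage_summary())
```

In the real model each capability would also carry a risk rating, and the component-by-attribute grid is what produces the heat map shown above.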


Saturday 15 February 2014

Sticky Metaphors for Software Projects: Bankable Builds, Surgical Mode

In this post I share an experience from a recent project where I influenced the senior management team to change strategy & tactics: moving away from the standard Development -> Integration -> QA Team 1 -> QA Team n approach (which lasted anywhere between 2-4 weeks) to a more controlled Agile, Iterative/Lean approach of weekly Build & Test cycles, and then a build every three days, in order to make the (fixed) deadline. We did this entirely manually - our Agile processes were still coming of age, and we had little in terms of real-world continuous integration. What I really wanted to share is the way in which I influenced and got people to change their mindset, by using simple metaphors that were actually quite powerful, and can be used to sell similar stories:
  • Bankable Builds - Reusing concepts from The Weakest Link
  • Surgical Mode - Taking a page from medicine - Surgery is about precision, don't do more than what's required
It is interesting that these concepts really stuck with the entire department, so much so that they've become part of the common vocabulary - almost every project now talks about Bankable Builds & Surgical Mode!

The post is presented as follows:
Note: Like any high-profile project with intense management focus, the decision to make these changes came from senior management, which put a spanner in the works with respect to granting our "Agile" team autonomy & empowerment. Although a little resistance was felt at the lower levels, the teams nevertheless remained committed to do-what-it-takes to ship the product. The teams and vendors did share their concerns in the retrospective feedback that the one-week release cycle was quite problematic to the point of chaotic with large overheads.

The facts can't be disputed though: our agility was indeed flawed from the get-go by not having a solid continuous build, integration & regression test environment (lacking automation) in place. It was a struggle, with a lot of manual overhead, but it was a necessary intervention that got us to launch on time. Not doing so would have meant missing the target deadline, and the project was too high-stakes to ignore this intervention.

Going forward, however, the principle of bankable builds is still a solid one, well known to teams with mature software processes. I can't emphasise enough the value of investing in up-front automation: continuous build, daily regression testing & automated component / system tests will make life so much easier for software development teams.

Invest the time and effort getting these foundations set up, even at the expense of churning out code; failing to do so builds a huge backlog for QA down the line, with teams ending up spinning their wheels putting out fires that would have been avoided had the right discipline been in place.

Tuesday 3 December 2013

Summer Reading List over December


Since returning home* to South Africa in June 2011, life has been really pretty damn hectic. They say relocating your family & household contents is one of the most stressful things you can do in life - added to that, working with a new company presented a massive culture shock. So I had my hands full handling not only work but also the family transitioning to the new life. Not to mention I quit my permanent job and ventured into consulting territory. This transformation has set me back in some ways: in particular, my reading list fell on the back burner, and it's only now that I'm starting to rev it up again. Another reason for not maintaining my reading as I used to in the UK is that South Africa is way behind in the online shopping space; the deals I used to get on Amazon were so great in terms of price (not to mention service!) that when I pick up books from the shelves in stores here in SA, or even online (Kalahari), I often cringe at the price tags (definitely cheaper in the UK). I still haven't made the leap to the Kindle yet, which would be the sensible thing to do (although the Pound/Rand exchange rate is crazy!) - I was an early adopter a few years back, but I really wasn't impressed at the time, so I still prefer my hardbound copies! I do have my eye on the Kindle Fire HDX though...

Here's a snippet of the handful of books piled on my bedside table for reading over the South African summer break:





Monday 4 November 2013

Nissan Motors Production Line and Software Systems Development

This week I was invited by @Farid from Crossbolt to tour the Nissan Production Line in Pretoria, to see first-hand the Lean / Kanban / Kaizen production model in action: cars come off the line every four minutes (cycle time), with a zero-defect straight-through rate.

I jumped at the opportunity without hesitation. Although I've run software projects on a factory-line model before, I had never actually been inside a real car production factory. Fareed had decided to take his IT Operations team on this tour, after doing the theoretical knowledge-share on Lean.

He felt it would make more sense for his team to witness first hand the theory in action, where the metaphors can be easily visualised and applied to an IT Ops / Systems Integration space. Not a bad idea...

This post is just a brief dump of my personal take-aways from the visit. I will try to expand on them in later posts once I've let the ideas play around in my head for a while and morph into shape. Despite what some people might say - that software is a creative art, that it can't be rushed and boxed into a production line - when it comes to pushing out consumer production software, I beg to differ: there are indeed many parallels from the manufacturing sector (which is seriously way more mature than software development) that can be drawn upon and applied to software teams. Incidentally, these are the roots of Scrum / Lean anyway - taking the same concepts and applying them to software teams...

There's plenty of info out there on Kaizen / Kanban / Lean / Poka Yoke / Scrum - I won't go into detail here. For context though, the line we visited was a multi-model line. This means the line produces more than one type of vehicle in a continuous flow; any team on the line could be working on a different model at the same time. Some car production lines specialise in just one model, but this line builds at least four different models. Because the flow is continuous, a car comes off the production line every four minutes, like clockwork. The teams working on the line, therefore, are cross-functional and cross-skilled.

Take-aways:

Saturday 13 July 2013

What it takes to cut the mustard in STB Systems Integration

I have had this topic sitting on my backlog since I first activated this blog; now is a good time to start it off, since recently it has become quite a topic of conversation in the workplace. I will refine this post over time; this first instalment is just to share my thoughts around the role of STB Systems Integration, its importance, and the traits/skillsets that are pre-requisites for the role, i.e. what I look out for when hiring a Systems Integration Engineer.

First of all, one needs to understand the context of being a Systems Integrator. In the context of Set-Top-Box software, there are a couple of variations on the concept of integration. The core building blocks for this software stack are generally split into the following components: Hardware, Drivers/Operating System, Middleware, Conditional Access Client, EPG Application, Virtual Machine Engine and Interactive Applications. In terms of variations of SI, it essentially falls into two categories, not so different to the "White Box" & "Black Box" concepts used in testing - and the category has a direct impact on the level of influence & amount of topic-specific knowledge the SI team has or is empowered with, and thus a direct impact on the outcome of the project.

Ultimately, the PayTV Operator owns the full stack that makes up the product. The components are often provided by multiple software vendors, and assembling these components into a workable stack is left to the Systems Integrator (SI). SI acts as the mediator, the aggregator, the arbitrator - the assembler of components: it puts the build together, runs smoke & sanity tests including full functional testing, investigates defects and assigns them to the offending components. SI controls the flow of releases, plans the release schedules and is ultimately accountable for generating the launch software and getting it into the deployment / operational space.

In a previous post, SI is King, I stressed the importance of SI. I still maintain that SI is in fact the most important role in a STB project: SI has high visibility and high impact, is the gateway to success and the gatekeeper for maintaining the integrity of the software. Without a strong SI team, your project is really dead in the water; it will thrash for months on end and will likely end up doomed to failure.

In terms of growth potential for engineers, I personally believe - through my own hard-won experience in various engineering roles, as well as from managing projects across several development & integration teams - that Systems Integration (SI) is the ultimate place to be in terms of career progression, and should be a natural development path for engineers seeking the next level up from software development...

The effectiveness of the SI team is governed by the team's ability to execute on expert technical knowledge, implementing best practices, streamlining processes, and operating as an efficient, well-oiled machine. More importantly, the strength of SI is determined by the calibre of its engineers, the strength of its workforce.

In this post, I will share my thoughts around the following areas:
It doesn't really matter whether the context is Black Box or White Box; however, I do feel that Black Box SI is often counter-productive and really at odds with the intent of true SI, though in some projects or commercial agreements it is unavoidable. SI engineers who shine in Black Box SI would probably excel even more if they had access to source code, build tools, etc. On the whole, when it comes to requirements for SI engineers, they should be equipped to work equally well in either context.

Tuesday 2 July 2013

Programme / Systems Readiness Heat Maps


In the past I've written about the make-up of the Digital TV (DTV) ecosystem or "value chain", making it quite clear that the system in itself is a complicated mix of different systems, often provided by more than one vendor. To recap, you can follow up on these posts:
It is not very common for DTV projects to impact the end-to-end value chain, where implementing a new feature or major product launch touches just about every element of the system, but it does happen. It is complicated, but it can be managed. It takes an investment in effort, diligence, rigour, strength of will, and patience to work with the different teams so they coherently come together in delivering the overall plan.

In this post, I share what I've come to find quite a useful tool: the Programme/Systems Readiness Heat Map.

It is well known that human beings think best in pictures. As the well-known works of Edward Tufte teach us, visualisations are a powerful means of communication. A visualisation, if done correctly or smartly, can accurately reflect the state of "things" or "tell a story", so that almost anyone can get the message just from looking at the picture.

So, as a Programme or Systems Project Manager dealing with complicated technology components and timelines that are highly parallel and intertwined, when it comes close to the final delivery stages of the project, what is the best way for you to communicate the overall status of readiness of the system?

Sure, one can use a series of PowerPoint slides representing the status of each Work Package, focusing on the key delivery criteria for each component - this is usually what most PMs do anyway when reporting overall Project Progress via status reports. But your SteerCo has very little patience to wade through 30 slides of PowerPoint tables...

So, what the Heat Map does is simple: on a single piece of paper (okay, maybe the size is A3!), the heat map shows the state of the entire System / Programme, focusing on the key criteria for each component that determine the fit-for-purpose state that your SteerCo & Executive can use as input into making the decision to approve final deployment: Go Live. Essentially the Heat Map is represented as an n x m matrix, with n the number of components to track and m the number of metrics/criteria that must be met to guarantee successful implementation and deployment. Each entry is given a value or an associated Heat Colour (Red, Amber, Green), so that when all entries are filled in, the reader can quickly ascertain the state of readiness (are we hot or cool or in between?).
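As a minimal sketch of that n x m idea - not my actual template, and with component and criteria names invented purely for illustration - the whole thing boils down to a small RAG matrix where a component is only as ready as its worst cell:

```python
# Toy Programme Readiness Heat Map: components x criteria, RAG-valued.
COMPONENTS = ["STB Middleware", "EPG Application", "Broadcast Headend", "Billing/SMS", "OTT Backend"]
CRITERIA = ["Functionally complete", "Open showstoppers", "Performance NFRs", "Operational readiness"]

RAG_ORDER = {"R": 0, "A": 1, "G": 2}  # Red worst, Green best

# One row per component, one column per criterion (illustrative values).
heat_map = {
    "STB Middleware":    ["G", "A", "G", "A"],
    "EPG Application":   ["G", "G", "A", "G"],
    "Broadcast Headend": ["G", "G", "G", "G"],
    "Billing/SMS":       ["A", "R", "G", "A"],
    "OTT Backend":       ["G", "A", "A", "R"],
}

def readiness(component: str) -> str:
    """A component's overall readiness is its worst (lowest-ranked) RAG entry."""
    return min(heat_map[component], key=lambda colour: RAG_ORDER[colour])

for name in COMPONENTS:
    print(f"{name:18} -> overall {readiness(name)}  ({', '.join(heat_map[name])})")
```

On paper the same matrix is simply coloured cells, which is exactly what lets a SteerCo read the Go Live picture at a glance.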

I will talk you through a generic template that I've used in my own programs...

Wednesday 22 May 2013

System Integration is King - Release Manager, Build Meister-Gatekeeper

I spent a lot of energy in the last week trying to convince (and succeeding in convincing) some senior managers about processes that, in my humble opinion, are tried-and-tested, well-established norms in the management of software: the art of Software Release Management - taking control of component releases, especially w.r.t. approving or rejecting bug fixes. It has been good practice in terms of establishing myself as a consultant-expert on all things related to Set-Top-Box Software Development & Integration. I spend a lot of my time promoting best practices and trying to win & influence people over - something I have to do to maintain the focus and direction of the overall programme plan...

The point I want to make is that, in a STB delivery project, the System Integration (SI) team is really the GateKeeper, with full authority and accountability for producing a stable, functional release. This means that SI have every right to manage component releases coming in, especially when reviewing & approving bug fixes. If SI are not happy with a component release - for example, the component team may have fixed more bugs than was required, or fixed far fewer bugs than was requested, or made any other changes that weren't specifically requested by the SI team - then SI can quite easily reject the component's release.
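A minimal sketch of that gatekeeping rule, assuming a simple world where each component drop declares the defect IDs it fixes (the IDs and the strict accept/reject policy here are illustrative, not a prescribed process):

```python
# Sketch: accept a component drop only if it contains exactly the approved fixes.
def review_component_release(approved_fixes: set[str], delivered_fixes: set[str]) -> tuple[bool, str]:
    unrequested = delivered_fixes - approved_fixes   # fixed more than was asked for
    missing = approved_fixes - delivered_fixes       # fixed less than was asked for
    if unrequested:
        return False, f"Reject: unapproved changes {sorted(unrequested)}"
    if missing:
        return False, f"Reject: approved fixes not delivered {sorted(missing)}"
    return True, "Accept: release matches the approved scope"

ok, verdict = review_component_release(
    approved_fixes={"DTV-101", "DTV-204"},
    delivered_fixes={"DTV-101", "DTV-204", "DTV-999"},
)
print(verdict)  # Reject: unapproved changes ['DTV-999']
```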

I have written in the past on the following topics:
All of these areas become the focus of a STB project as we get closer and closer to launch mode. Recapping the template that most STB projects follow:
  • Develop / Integrate Features >>> Functionally Complete >>> Closed User Group >>> Wider Field Trials >>> Launch Candidate 1 >>> ... >>> Launch Candidate X >>> Clean Run >>> Deployment Build >>> Launch
As the system matures to reach each of these milestones, defects are expected to become fewer and fewer. Obviously no product gets launched with zero defects! All projects can do is ensure, as best they can, that almost all high-impact, high-severity defects (a.k.a. Showstoppers or P0s) are dealt with to the best of their ability, and be prepared for the inevitable influx of new Showstoppers from in-field, real-world usage of the product - it is the reality, unavoidable. Very rarely do products launch having met all the original quality engineering requirements; after all, this is an entertainment device, not a life-critical device - so we can relax a little on quality objectives! :-)

So how should you control releases leading up to launch? It's simple really, not complicated, and really not rocket science...

Monday 29 April 2013

Pragmatic Set-Top-Box QA Testing - Don't just trust Requirements-to-Test Coverage scripts


This post might be a little edgy - I think it's important to understand alternative perspectives around Set-Top-Box (STB) testing, how it is uniquely different to other forms of IT Testing; to understand the dynamics of STB testing that promotes agility and flexibility, whilst simultaneously understanding the risks associated with pragmatic testing.

The Headline:
DON'T BLINDLY TRUST REQUIREMENTS-TO-TEST COVERAGE MAPS

Last year, I wrote quite an in-depth paper on Effective Defect & Quality Management in typical DTV projects. It covered many topics, and touched briefly on the aspect of Project Metrics reporting. This post expands on the subject of QA Metrics tracking, focusing on how this reporting can help change the direction of the project and instigate changes in focus to the overall QA effort, particularly around Set-Top-Box QA testing. I advocate that the project should change its QA focus as stability is achieved against Requirement-to-Test coverage maps, to include more Exploratory, Risk-based & User-based testing.

Saturday 27 April 2013

Worlds of QA Testing in Digital TV Systems Projects

I have previously written about the Digital TV ecosystem with its complexities and challenges in defining the Architecture & Systems Integration spaces. In this post I will share some thoughts around the QA (Quality Assurance) / Testing problem space, as well as generally accepted strategies for solving these problems.

What really triggered this post (which is centred around E2E QA) was a recent conversation with a QA Manager who in passing commented that he's facing a challenge with the "End-to-End QA" (E2E QA) team, in that it takes up to six weeks to complete a full QA cycle. Now some might think this is expected, as E2E QA is the last mile of QA and should take as long as it takes. My response to this is that it depends...

It depends on which phase your project is: is it still in early development / integration testing - where components are being put together for the first time to test out a feature across the value chain? Or is the project well ahead in its iterations having already executed much of the E2E testing? It also depends on the scope and layers of testing the systems and sub-systems before it gets to E2E Testing. Has a version of the system already been promoted to live? If so, what are the deltas between the already live, deployed system compared to the current system under test?

[--- Side note:
This post is one of many to follow that will encompass some of the following topics:
  • Worlds of testing / QA
  • Is ATP / E2E testing a longer cycle than other system test cycles - or should it be the shortest?
  • High level block diagrams of architecture / interfaces / system integration
  • Visualisation of the technical skills required for the various aspects of testing across different areas of the environment: a scale-of-technicality heat map
    • E2E QA is highly technical and should be considered as part of Integration, not a separate QA function
  • Which came first: the chicken or the egg? Headend first, then STB, or do both together - big bang?
    • How to solve complex systems integration/test in DTV?
  • What is end-to-end testing, what areas should you focus on?
  • What is required from a Systems ATP?
  • What kind of QA team structure is generally accepted practice? How does Agile fit in with this?
  • Can E2E Testing be executed using Agile - iterate on features End-to-End?
Side note --- end ---]

Thursday 7 March 2013

Overview of Open Source Software Governance in Projects

In an earlier post, I introduced the topic of the increasing use of Free & Open Source Software (FOSS) in Digital TV projects. This post provides a brief overview of the various areas to consider as part of managing open source software in projects.

I am pleased to share this as my first guest blog post, by Steven Comfort. Steve works as a Software Compliance Analyst; we crossed paths when I started searching for support & consultancy on implementing an end-to-end FOSS governance program. This is still a work in progress, but we'd like to share our take on our experience / learnings to date - in the hope it will help others who might be thinking of doing the same...

Open Source Compliance
With the partial exception of mobile phones, the embedded device operating system wars can realistically be said to be over, with Linux in its various flavours emerging as the clear winner. Coupled to the proliferation of open source software stacks and applications, it is highly unlikely that any device you purchase will be devoid of open source software.

When Free and Open Source Software first started penetrating traditionally proprietary software solutions, many people were sceptical, and the cliché "There is no such thing as a free lunch" was commonly heard. Hopefully this short piece will address those suspicions, because there is in fact a cost associated with using open source software. Put simply, this cost is associated with fulfilling the license obligations concomitant on that usage.

Tuesday 26 February 2013

Modeling Software Defect Predictions


In this post I will share yet another tool from my PM Toolbox: a simple method any Software Project Manager can use for predicting the likelihood of project completion, based on a model for defect prediction. This tool only scratches the surface of real software engineering defect management, but has nevertheless proved useful in my experience of managing a large software stack. For instance, the tool was born out of my direct experience as Development Owner for a large STB Middleware software stack that comprised more than eighty (80) software components, where each component was independently owned and subject to specific defect & software code quality requirements. I needed a tool to help me predict when the software was likely to be ready for launch based on the health of the open defects, as well as to use as evidence to motivate senior management to implement recovery scenarios. It was quite useful (and relatively accurate, within acceptable tolerances), as it offered perspective on a reality: more often than not, people underestimate the work required to get defects under control, and the need to predict project completion dates based on defect resolution rates (which depend on the maturity of the team) is often misunderstood as well.

I created the first instance of this tool back in 2008/2009; the first public version was shared in this post on Effective Defect & Quality Management. Since then, I've upgraded the tool to be more generic and significantly cut down the number of software components. I still use the tool in my ongoing projects, and have recently convinced senior management in my current project to pay heed to defect predictions.
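To give a flavour of the arithmetic such a tool rests on - and this is only a stripped-down sketch with made-up rates, not the actual model behind my spreadsheet - the core idea is to extrapolate the open-defect count from weekly arrival and fix rates until it falls below a launch threshold:

```python
# Minimal defect-prediction sketch: when does the open-defect count drop below
# a launch threshold, given steady weekly arrival and fix rates? All numbers
# here are illustrative assumptions.
def weeks_to_launch(open_defects: int,
                    arrival_per_week: float,
                    fix_per_week: float,
                    launch_threshold: int = 10,
                    max_weeks: int = 104) -> int | None:
    """Return the predicted number of weeks until open defects fall below the
    threshold, or None if the trend never converges within max_weeks."""
    if fix_per_week <= arrival_per_week:
        return None  # backlog grows or stays flat - no predicted completion
    week = 0
    while open_defects > launch_threshold and week < max_weeks:
        open_defects += arrival_per_week - fix_per_week
        week += 1
    return week if open_defects <= launch_threshold else None

print(weeks_to_launch(open_defects=350, arrival_per_week=25, fix_per_week=40))  # -> 23
```

The real tool tracked these rates per component and re-fitted them every week, which is what made it useful as evidence when motivating recovery scenarios to senior management.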