Empowering software engineering teams

Software engineering is a creative role. As software leaders we aspire to hire great engineers who can solve complex problems. How can we craft a culture, and implement practices, that empower software teams and push as much decision making down to them as makes sense? Below are 10 approaches that can be utilised to empower software teams.

1. Be clear your objective is to empower engineering teams

If empowering engineering teams is something new to the group, you need to articulate this change. Teams need to be aware of what is changing, and why it is changing. Most importantly, they must understand the value this change will bring to their role. In small organisations with well-formed teams, this could be orchestrated through a simple team meeting to explain the change and gather feedback. For example:

"Our objective is to ensure teams own as much decision making as possible as they are closest to their products and platforms. We are going to achieve this by bringing outcomes to teams, and empowering you all to own the solution. Hold us (leadership roles) accountable when you notice this is not occurring."

However, in a larger organisation with many software teams, momentum often needs to be built before widespread change. It can be beneficial to have early conversations with engineering leaders to understand their thoughts and feedback, to locate early supporters, and to resolve major concerns before wider communication to the group. This will ensure you have enough support from your leaders regarding the change. You can also start the roll out with a small subset of teams who are open to the change as a way to build early momentum and get a few wins on the board.

2. Bring outcomes to teams, not solutions

If you're an expert in your domain, or a seasoned employee, it is easy to unintentionally direct the team on exactly what needs to be done to solve a problem. For example:

"We need to support Bitcoin as a new payment method. You can implement it into [platform x], just extend [component x] and deploy it into our K8s stack. We can re-use the existing reporting features, and the UI should be pretty similar."

You might have implemented this capability before and believe you're fast-tracking the team's progress. However, you could instead be limiting the team's problem-solving capacity, and encouraging a culture where they rely on you as their manager to always bring solutions. This increases the risk of losing high-performing engineers, and of creating a team of doers, not thinkers.

Instead, practice articulating the problem in the eyes of the customer, and let the team solution. For example:

"We need to support Bitcoin as a new payment method for our customers. As a business we need to convert all transactions into fiat currency (not hold Bitcoin) so it integrates into existing workflows. We have a timeframe of 2 months to deploy a PoC."

After this, pause and let the team digest. Then ask:

"What questions do you have? What isn't clear? What more information do you think you need?"

If you have solved this problem before, state that. For example:

"I've solved this problem in the past, which may or may not still be relevant. I'm happy to share that experience with the team now or after you have had time to do your own research."

By explaining the situation in this way, you are setting context and boundaries, and allowing the team to dive into the implementation specifics. The benefit is that you are empowering the software team to own a solution, and to investigate options that you may not have even envisaged. You are not constraining the team's thinking. As a result, they may choose to implement this capability as a new standalone service so the functionality is more maintainable and more easily testable. They may deploy into a serverless architecture instead of K8s to simplify the architecture and reduce running costs.

3. Coach stakeholders on how to bring outcomes, not solutions

An extension of the above that I've experienced many times in the past, and which is harder to solve, is business experts coming to the team with a pre-defined solution. These are individuals outside of your direct line of control. They may be product managers, pre-sales engineers, knowledgeable domain experts and so on. It can sometimes be presented as: "We signed up a new [customer X], and we need [API X] to be adjusted to have new fields [a, b & c] to satisfy their requirements; we have told them we can implement it before they integrate."

These requests are typically not malicious in nature; however, they can be problematic and time consuming to solve. Stakeholders tend to do this because they genuinely feel they know the right solution and are saving the engineering team time. What they are missing is that software engineering teams can take a holistic view and more than likely solve the problem using an alternative approach. This alternative approach could be made relevant to all customers, or be solved in a more maintainable way to reduce technical debt.

Below are 3 approaches to coach stakeholders to bring outcomes to teams, instead of solutions:
  • Coach stakeholders on how to bring outcomes to teams
    • Actually investing your time in running training sessions to coach stakeholders on how to bring outcomes to teams (coupled with the 2 points below) will create significant progress. Explain how to define an outcome, the benefits it has for the team, and importantly for them as stakeholders.
    • I've found that you need to run these sessions continuously throughout the year, especially when you identify stakeholders reverting back to old habits.
  • Template out how to bring outcomes to teams
    • Bringing an outcome to a team requires a stakeholder to think about the problem they are trying to solve. Defining a template for documenting this in a succinct way allows a stakeholder to quickly define the problem and the outcome they are after.
  • Document a DoR (Definition of Ready)
    • A well-documented DoR clearly articulates how an item of work should be presented to a team, in a format the team can work with.
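
As an illustration, a minimal outcome template might look like the below (the headings are my own suggestion, not a prescribed standard - adapt them to your organisation):

```
Outcome:          The customer or business problem to solve, and for whom
Context:          What we already know (existing workflows, affected customers)
Constraints:      Known boundaries (compliance, budget, hard deadlines)
Success measures: How we will know the outcome has been achieved
Out of scope:     What this request deliberately does not cover
```

A stakeholder filling this in is forced to describe the problem, not the implementation, which is exactly the behaviour the coaching above is trying to encourage.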

4. Define what success looks like (how will the team be measured)

With more empowerment, comes more responsibility. This requires teams to understand how they will be measured on the success of their work. This could include a combination of:

  • A specific scalability metric is met (eg: TPS for an endpoint or service)
  • The solution brings the team 1 step closer to an architectural target state
  • The solution must be less than a specific monthly cloud hosting cost
  • It must meet a specific reliability or redundancy measure (eg: can support 1 datacenter going offline)
  • The work needs to be delivered within a specific timeframe

Whichever measures you choose, it is important they are defined and communicated to the team who will be owning the work.

5. Be upfront and transparent on boundaries

Boundaries and constraints always exist; it is your role to communicate them to the team early. Delaying will almost always cost the team cycles in re-solutioning or re-developing. Boundaries to think about could include:
• Is there a specific technology that needs to be used?
• Are there resource constraints in another team that need to be worked around?
• Are there specific internal/external platforms/APIs this project needs to interface with that have rate-limit thresholds?
• Is a specific platform being deprecated in the near future that teams need to be aware of, so they don't make unnecessary improvements to it?
• Is this a short term solution that will be replaced in the near term, hence not requiring teams to architect and implement a long-lived solution?

6. Be transparent on known timeframes

There is nothing worse than empowering a software team to solve a problem, only for you as a leader to simply make the decision for them 2 weeks later when they have missed a deadline they never knew about. This will almost certainly erode trust. As a leader, it rests on you to clearly, let me repeat, clearly, articulate any known deadlines that exist when briefing your team. You should communicate whether a deadline is a hard deadline (eg: a marketing campaign launch) or a soft deadline (eg: an internal deadline that could possibly shift).

Even if a deadline seems too short, still communicate it to the team. This doesn't need to be a negative: it can allow the team to solve a short-term tactical outcome to meet a deadline that cannot move, while working on a longer-term solution without the added time pressure.

7. Evangelise and drive end-to-end ownership

Ownership of a product, a user journey, a platform, or a business domain is in my view fundamental to empowering a team. A team needs to be accountable for as much of the SDLC as makes sense in a specific organisation. This means teams owning everything from software architecture through development, testing and deployment, to supporting their products in production.

When teams have end-to-end ownership, it encourages them to continually improve their platforms and processes as they know the buck stops with them. They are the ones who will be woken up at 2am if the application crashes, they will be the ones who have to resolve technical debt due to a decision made in the solutioning phase, and they will be the team spending effort manually deploying an application. These risks all work as motivators to deliver scalable, maintainable and reliable products into production.

8. Provide teams the time to solution a problem

Whether it's the team, a technical lead, a stakeholder, an architect, or a CTO, someone needs to invest time to understand the problem, explore possible solutions and devise a plan. I've witnessed organisations develop a perception that teams spending time solutioning a problem is too disruptive, preferring that a role outside of the team do this work - leaving the team to "just cut code". This, however, goes against empowering a software team to own their products and platforms. When teams solution together, they understand their products better, which makes future solutioning more efficient. It also encourages knowledge to be spread amongst the team, not held by a specific individual. Approaches to provide teams time to solution include:

• Encourage teams to leverage spikes. Spikes provide time for teams to look at multiple solutions, explore in more detail and ultimately arrive at a more accurate next step and work effort estimate. A word of caution: in general, a spike should avoid creating additional spikes for teams. A spike's output should be a work effort estimate. Spikes should also be time boxed.
• Encourage Proofs of Concept when there are too many unknowns. It is advisable to time box PoCs and have clear outcomes defined for each PoC.
• Run team solutioning sessions (or solution review sessions), and encourage team members to prepare by bringing potential solution ideas to the group.

9. Ensure teams have the right skills and roles

Although any team can be empowered, to set a team up for success you need to ensure it has the right skills. Every team could have a different makeup of skills and experience based on their remit and the organisation. Areas to think about include:
• Do you have the right balance of graduate to senior engineers within the team?
• Do you have the skills needed within the team to satisfy QA as part of the DoD?
• Do you need a dedicated role for cloud infrastructure in the team?
• Do you have experienced individuals who can guide complex software architecture decisions?
  • This doesn't have to be a software architect; it could be a principal engineer or staff engineer
• Do you have product management or product ownership within the team?
  • This can be matrixed in to avoid organisational structure changes

10. Keep your hand on the pulse

Ultimately, as a software engineering leader you are accountable for your teams' effectiveness and delivery. Just because you are empowering teams doesn't mean you can sit back. Your role is to be aware of when your team is spinning wheels, becoming distracted, or solutioning the wrong problem. Your role exists to jump in and provide guidance, be there to help unblock, be a decision maker when teams are in deadlock, or sometimes just connect the right people within the organisation. Below are 5 techniques that can be utilised:

• 1-on-1s: You will often hear about blockers or frustrations in 1-on-1s without even asking. If not, always probe, just in case your team member is holding back.
• Attend demos: End of sprint demos, or product demos in general, are a great way to stay in the loop in a casual environment. If teams are not running demos - encourage them to!
• Show a genuine interest: Being a leader who is genuinely interested in your teams' projects (typically by just asking questions) will craft an environment where members will more openly discuss what is going on in their day-to-day.
• Follow relevant JIRA boards: Replace "JIRA" with whichever application the team uses. Following status updates on JIRA can provide you great intel on project statuses in an asynchronous way.
• Follow the documentation: Most teams use a wiki or similar. By following the relevant pages/spaces you will often be able to stay up to date on high level solutions and architectures. If you aren't noticing documentation, this may be a red flag to begin asking questions.

Building a culture of empowering software engineering teams is a rewarding opportunity. It will allow you to scale your team, and your role as a leader within the organisation. If you found this post useful, check out my book "Leading software teams with context, not control" on Leanpub as well as other online stores like Amazon, in eBook or print format.

How I wrote my first non-fiction software leadership book

Posted on Sep 23, 2020

Around 10 months ago I began work on a new side project to author a book on software engineering leadership. I was confident I went in eyes wide open, knowing it wouldn't be an easy undertaking, but in the end it was considerably harder than I had even imagined. As a retrospective, I wrote this post to share the 18 (yes, way too many) stages I took when writing my first book "Leading Software Teams with Context, Not Control". In hindsight, I could have easily reduced the number of stages I went through; however, you always learn a considerable amount the first time you do anything. I'm sure there are many more effective ways to author a book, but as a first time author this is the approach I took (which did evolve as I worked through the project). So yes, you probably shouldn't use this as a definitive guide if starting out on writing your own first book!

Here is a visualisation of the % of the book completed against hours invested:

Where it all began...

It all started with a relatively innocent Google Doc, working a few hours every few days, and ended with a GitHub repo with each chapter in Markdown, a bunch of CSS, and an automated pipeline to generate epub, mobi and PDF outputs for each of the platforms I was to publish the book to. Oh, and dedicating 3 hours every night to get this project over the line.

Part 1 - Defining the chapters

Completed: 1%

Time taken: 12 hours

I spent the first 12 hours defining the structure of the book. This involved breaking it down into 20 chapters focused on aspects I believed were important in relation to software engineering leadership. Within each chapter I listed a small number of dot points on topics I planned to focus on.

Lesson learned: Start with a GitHub repository and use Markdown from the very beginning! Also set yourself up on https://leanpub.com/ - it's an amazing self publishing platform that encourages you to publish early and publish often.

Part 2 - Defining the chapter structure

Completed: 2%

Time taken: 6-7 hours

After analysing a selection of well written books I had previously read, it was important for me to ensure a consistent structure and flow throughout each chapter for the reader. I defined a 5 part structure for each chapter to follow. You can think of these as top level headings within the chapter. However, you will notice in my book these headings don't exist, yet each chapter follows this structure:

• What: An overview of the topic the chapter is about
• Why: An explanation of why this topic is important in relation to software engineering leadership
• Story (optional): An example or story I could share from my own experiences
• How: Approaches on how to implement this topic within your own software engineering team
• TL;DR: 3 important takeaways from the chapter

Lesson learned: After initially spending a few hours on this, I ended up reviewing 6 of the best books I had read and defined a structure that took the best of each.

Part 3 - Brainstorming each chapter

Completed: 5%

Time taken: 40 hours

At this point I began drafting out each chapter as a set of more refined bullet points that aligned to each of the structure headings listed above. Spending around 2 hours on each chapter allowed me to get as many details as I could out of my brain and into the Google Doc as quickly as possible.

Lesson learned: Choose your language up front and stick to it. I started with English (AUS), then moved to English (US), then later converted it to English (British). Wasn't the best use of time...

Part 4 - Drafting out each chapter

Completed: 15%

Time taken: 80 hours

With each chapter now a set of refined bullet points ranging from 1-4 pages, this part involved bringing each chapter together into a roughly formed draft. Each chapter was partially readable and had a level of flow to it. This took around 3-5 hours per chapter. At this point I had 18 chapters that were coming together nicely, but had also outlined 12 additional possible chapters as notes at the bottom of the document - which was clearly too many. I had also begun reordering chapters to improve the narrative of the book.

At around this point my wife and I discovered we had a baby due in 7 months. It was then I began to realise I needed to finish this project before then - I now had a deadline. But... as it was still a fair way out, I didn't take the deadline as seriously as I should have. However, in a few months time I would.

Lesson learned: Set yourself a deadline for the project, as well as a deadline for each smaller milestone. Then commit 'X' number of hours per week to dedicate to authoring your book.

Part 5 - Adding in data points and examples

Completed: 24%

Time taken: 80 hours

It was now time to bring hard facts, data points and career examples into each chapter to back up the claims I was making. For some chapters this was about reinforcing claims from experiences I had had throughout my career. For others it was crawling through my Twitter history, which I use to share articles I have found valuable over the years, to find the specific content that helped support my book's content. This took around 4 hours per chapter.

Lesson learned: Correctly collate and document your book's references at this point, not at the end of the project.

Part 6 - Research break

Completed: 25%

Time taken: 2 hours

At this point I had the perception I was progressing well. I began to research more tips and tricks around publishing a non-fiction book. One valuable insight I gathered was that when writing a book you should start by drafting 4-5 'fake reviews' that you would theoretically like people to write about your book once they have read it. These give you something to continually refer back to, so you can keep yourself honest about the content you are authoring into your book. You want to write both positive and negative reviews. I came up with:
• This book is a great toolkit of practices I can essentially copy and paste into my role as a software team leader.
• I found valuable insights into how to set up specific initiatives I should be orchestrating in my role as a software engineering leader of multiple teams.
• I enjoyed the top 3 summary items at the end of each chapter which helped reinforce key items I need to remember.
• The exercises included within each chapter have been really easy to adopt into my leadership role.
• Although I learned different approaches to managing software teams, this book focused more on leading multiple teams than a single software engineering team.

A quick break

At this point I was feeling rather positive. I had written most of the book, and had convincing data to back up my content. All that was left was to do a few rounds of clean up and generate an ebook for publishing. Easy, right? At this point I thought I was about 65% through the project; in reality I was about 20% through.

Part 7 - Clean up stage 1

Completed: 40%

Time taken: 125 hours

Starting from chapter 1, I began working through each chapter to fix up grammar, improve the flow of sentences, include additional detail where needed and delete content that was just unnecessary. The first 6 chapters of the book were sequential, whereas the subsequent chapters were all independent and could be read in the reader's preferred order. I invested a lot of time attempting to make chapters 1-6 flow seamlessly so as to not confuse the reader. I spent over 16 hours cleaning up just the first 6 chapters, rewriting, and rewriting more. This is when I hit my first wall, and began to realise I was definitely not 65% through, but more like 20% through, authoring this book. I now realised I needed a more structured time investment to progress, and began to dedicate 3 hours, 3 nights a week to this project. Chapters 7 onwards were slightly easier to clean up - but still a considerable amount of work.

Lesson learned: Set up a schedule for time investment early on that you can commit to.

Part 8 - Clean up stage 2

Completed: 55%

Time taken: 98 hours

The book was now at around 55,000 words and my Google Doc was beginning to struggle from a performance perspective, but I persisted with it (for the moment). Starting again from chapter 1, I began a second pass to improve grammar, remove duplication and generally tidy up. I eventually had to condense my first 6 chapters into 5, as they just weren't working the way I had originally authored them. This took up a lot of time and was mentally challenging. I also ended up deleting an entire chapter (chapter 18 to be specific) as the book was becoming too lengthy and that chapter added the least value. Again, it was mentally very hard to delete large blocks of content that I had already sunk many hours into. However, in the end it was the correct decision. At this point I also made the hard decision to remove all the other 'potential chapters' that I still had as notes at the bottom of the book. Maybe they will appear in a second book sometime in the future. I spent around 2 hours on each chapter, and again more time refining chapters 1-6.

Lesson learned: It is entirely ok to delete large amounts of content from your book if it will lead to a better experience for your readers - but yes, it will be mentally challenging to do so.

Part 9 - Defining and setting up 'the matrix'

Completed: 57%

Time taken: 12 hours

I was now spending 3 hours each night, 6 days a week authoring this book. I had around 8 weeks to go until the baby deadline would arrive, and I was running very short on time. To the few people I was talking with about my book, I would confidently say I was tracking well; however, personally I was not so sure. Throughout the previous month, I had been furthering my investigation into what makes a great non-fiction book. Again, something in hindsight I should have undertaken at the start of the project. I came to the conclusion that I needed to define 8 measures each chapter should satisfy for the reader. They were:

• Clearly state the problem the chapter was written to provide a solution for
• Provide an actual solution the reader could adopt within their own team/role
• Include two data point facts in each chapter
• Include 1 example or personal story
• Relate to the reader's experience to connect with them throughout the book
• Share something important the reader probably wasn't aware of
• Include a call to action and/or summary at the end of each chapter
• Make claims that are as strong as they can be made without becoming false

I put together a simple matrix (eg: a table), with those 8 measures as columns and each chapter as a row, so I could track which measures I had accomplished.
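
For illustration, the matrix could be as simple as a table like the below (the chapter names and statuses are placeholders):

```
Chapter    | Problem stated | Solution | 2 data points | Story | ... | Strong claims
-----------+----------------+----------+---------------+-------+-----+--------------
Chapter 1  | Done           | Done     | 1 of 2        | Done  |     | Done
Chapter 2  | Done           | Done     | Done          | N/A   |     | To do
```

Working left to right across each row quickly shows which chapters still have gaps.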

Lessons learned: Spend a week or so researching what makes a great non-fiction book before you begin. As well as this, create your own matrix to help keep you honest to your readers in each chapter.

Part 10 - The matrix phase 1

Completed: 65%

Time taken: 69 hours

With the matrix defined, I worked through each chapter ensuring it tackled each measure, and where a measure wasn't relevant marked it as N/A. In this phase, I skipped over any item that had the potential to require considerable time investment. I spent around 2-4 hours on each chapter.

Part 11 - The matrix phase 2

Completed: 70%

Time taken: 46 hours

This part involved going back through each chapter and filling in the missing gaps within the matrix. Although some chapters took longer than others, I averaged 2 hours per chapter. At this point in the project, I began to feel as though I had a book that tackled the key problems I had set out to address.

Part 12 - Crafting the title

Completed: 75%

Time taken: 12 hours

I knew I needed to proof read the book again (at least once or twice more); however, I needed a break. I had hit a writing exhaustion wall. It was now time to come up with a title... Easy, right? Ok, probably easier than finding a name for a new startup with an available domain name. But still hard!

I wanted something I could relate to, that would catch the interest of a potential reader. After around 12 hours of brainstorming many different titles and variations, as well as gathering feedback from a few individuals within my network, I settled on "Leading software teams with context, not control". Context over control was a chapter in my book, and it is a leadership methodology I have followed for many years - so it seemed quite applicable.

Lesson learned: Procrastinate on a title until you have one that resonates well with you.

Part 13 - Authoring surrounding chapters

Completed: 82%

Time taken: 17 hours

With a title in place, I now needed to author the surrounding chapters within the book, which included:

• About the author
• Why the book was written
• Acknowledgments
• Glossary
• Notes

Lesson learned: References are still as hard and time consuming to create as they were in the university assignment days.

Part 14 - The final proof read

Completed: 90%

Time taken: 56 hours

With the book mostly complete (90%), it was time for the final proof read. I spent just over 2 hours reading through each chapter very slowly, in a large font, to ensure it flowed well for the reader. I also began using grammarly.com, a great tool to improve grammar and sentence structure.

Lesson learned: Start using the grammarly.com app from the very beginning.

Part 15 - Creating a cover page

Completed: 91%

Time taken: 28 hours

What can I say, I spent too much time finding the 'right' shade of blue. I used a great online tool called figma.com to design the cover page. I ended up creating 4 cover pages for the different stores I published the book to.

Lesson learned: Probably a good idea to outsource this to a freelancer.

Part 16 - Building the epub files

Completed: 94%

Time taken: 46 hours

I had now written the ebook; how hard could it be to create a beautifully formatted epub? Well, first of all, if you want any control over your styling you best be using markdown files with CSS. That was 16 hours lost converting my entire Google Doc to markdown. Then you need a tool to convert markdown to epub. I ended up using pandoc.org - a very handy tool. If you are planning on using pandoc, be sure to read the documentation carefully and add in all the correct CLI arguments to create page numbers, TOC, title page etc... It can't do everything, but it can do enough.

After spending many days experimenting with different approaches, I ended up scripting what I was doing into a repeatable process to generate epub files on the go. I then spent a few days crafting CSS to carefully format the epub into a well presented and readable format.
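
As a rough sketch of what such a build script might look like (the file names, CSS path and metadata file are illustrative - check the pandoc documentation for the flags relevant to your version):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Combine the per-chapter Markdown files, in order, into a single epub.
# metadata.yaml holds the title, author and language; epub.css holds the styling.
pandoc chapters/*.md \
  --from markdown \
  --to epub3 \
  --metadata-file=metadata.yaml \
  --css=epub.css \
  --toc --toc-depth=2 \
  --epub-cover-image=cover.jpg \
  --output=book.epub
```

Scripting this means every formatting tweak is one command away from a fresh epub, which matters when you end up regenerating the book hundreds of times.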

This was definitely more time consuming than I had expected, but it was a nice change to jump back into the code.

Lesson learned: Start with markdown, accept that large tables are near impossible to format well in ebooks, and finally script your build process for the different ebook formats and sample formats. I regenerated my ebook over 200 times while testing out formatting.

Part 17 - Professional copy editing

Completed: 97%

Time taken: 38 hours

Cost: Approximately $1,600

Every book requires professional copy editing to perfect it for your readers. There are so many options out there; in the end I used the reedsy.com marketplace to find a great copy editor.

Lesson learned: Pre-book a copy editor, as it took around 4 weeks for me to fit into their schedule once agreeing on terms.

Part 18 - Submitting the ebook

Completed: 98%

Time taken: 7 hours

It was now time to publish my ebook to the relevant online stores. I started with LeanPub.com, then Amazon.com, Apple Books and finally Google Play. It took some time setting up the relevant accounts, tax information, author bios etc...

Lesson learned: Upload your epub to Amazon.com, then download the generated mobi file and use that for platforms like LeanPub. Also, mobi will require some minor CSS additions for formatting.

Part 19 - Preparing for print on KDP

Completed: 100%

Time taken: 19 hours

After spending nearly a year authoring my book into an ebook format, why not publish a paperback as well? Amazon KDP (Kindle Direct Publishing) is a great self publishing platform for on-demand printing of paperback books. You need to upload a print ready PDF, with a cover. To do this I converted the epub file into a PDF using 'ebook-convert', which is part of the calibre application. I also needed to inject additional CSS for print specific styles. Preparing for print is a rather simple process using KDP; the only downside is a 72-hour approval window after each new upload. However, KDP scans your PDF for formatting issues and alerts you to fix them before publishing. Once published, you can order cheap author copies for your review.
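
The conversion step can be sketched roughly as below (the page size, margin values and CSS file name are illustrative - see the calibre ebook-convert documentation for the full option list):

```bash
# Convert the finished epub into a print-ready PDF for KDP,
# injecting print-specific CSS (page breaks, table styling).
ebook-convert book.epub book-print.pdf \
  --extra-css print.css \
  --paper-size a5 \
  --pdf-page-margin-top 54 \
  --pdf-page-margin-bottom 54
```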

Lesson learned: You can apply specific CSS to your print version, for example page breaks, table styling etc... You will want to do this once you have finished your book though, to avoid any unintended formatting issues if you adjust large amounts of copy.

    Project complete!

    That's it, 10 months, 792 hours later, I had a 23 chapter book with 349 pages, 68,000 words available in eBook and print formats available globally. It was quite a journey, where at multiple points I thought about pulling the pin, but eventually pushed through and completed the book (unlike many other side projects I still have 'in-progress'). All that is left is to promote the book :-)

    What were my final takeaways?

    • Authoring a book is harder and more time consuming than you will ever realise. Go in prepared for this.
    • Carve out consistent time each week, break your project into smaller milestones and set deadlines for each.
    • Research aspects of successful books, and draft a plan before starting. Still evolve it as you go though.
    • You will learn to correctly spell some of those words you always used to rely on spell checker for.
    • Automate your ebook and print generation. I probably regenerated my ebook over 200 times; automating the process will be worth your time investment.
    If you are interested in a copy of my book "Leading Software Teams with Context, Not Control" you can purchase it here.

    Balancing technical uplift with product development

    Closely behind the holy wars of programming languages is the battle of technical uplift versus product feature development. It exists in every software organisation around the world; however, some are able to balance the two more evenly. Software developers are best placed to understand how to keep their platforms stable, online and scaling with customer growth demands. This typically results in the software team being motivated to focus on technical uplift initiatives that improve scalability and reduce technical debt. The product team, on the other hand, knows and understands the customer, and is motivated to continuously find new ways to engage and inspire them. This results in the product team focusing on initiatives that drive new capabilities and features delivering a better customer experience than their competitors. Finding a technique to balance these potentially conflicting and competing priorities can be challenging and fuelled by justified passion from both sides. In mature and collaborative organisations though, it promotes healthy and constructive debate on which initiatives add greater value.

    It is important to note that technical uplift does not refer only to technical debt. Although technical uplift sometimes needs to occur due to long-standing technical debt, technical debt is the result of specific historical decisions knowingly made that result in a platform becoming more expensive to maintain or scale. Some technical debt is bad debt, in that the cost of repaying those decisions becomes exponentially higher as time passes, whereas other debt is more easily repaid at a later point in time. A crucial call-out is that new technical debt is accepted and taken on as a team; it isn’t forced onto the team by stakeholders - that is not a software engineering team culture you want to let evolve.

    Why the decision is not binary

    Balancing technical uplift and product feature build rarely needs to be a binary decision. From a software developer’s perspective, they work day in and day out within the platform’s codebase. They understand many of the pain points that result in scalability issues, the areas of the code that every software developer avoids working on because the slightest change can cause unknown production issues, and the frustration of repetitive and manual processes that have evolved over time and made the team ineffective. Failing to ensure the software team has a voice in prioritising technical uplift can result in three key impacts. Firstly, the organisation is not trusting the most experienced people working on the platform, who understand the key technical issues being faced and how to improve them. Secondly, the organisation is not solving fundamental problems within the platform that would enable faster and more effective development and deployment of new product features into production. Lastly, without trusting the team and empowering them to improve the codebase and platform, the culture within the team will suffer and ultimately the higher performing software developers will disengage and move on.

    From the product perspective, product owners and product managers attempt to deliver the best possible customer experience. They listen to customers, gather feedback and aspire to release a range of new features to continually attract new customers or retain existing ones. They understand that doing this well will lead to an increase in profitability for the organisation. Without prioritising the development of new capabilities, a platform will become irrelevant as it eventually loses customers to competing organisations and platforms. The thought of losing customers is demotivating for anyone working within a product role and can lead to increased turnover of product owners and managers within the organisation.

    As mentioned above, this is not a binary decision: an organisation does not have to focus 100% on technical uplift or 100% on product feature build. A healthy balance needs to be found between the two to ensure platform scalability, new product development and retention of team members within the organisation. Even an early stage startup focused entirely on growth needs to dedicate a percentage of time to technical uplift to ensure platforms don’t become too expensive to scale and maintain - a trap known to have crippled many startups throughout their scale-up phase. Twitter shared an insightful post on their blog many years ago that spoke to serious performance issues in their search capability. It was the result of years of technical debt in their Ruby codebase, which made it challenging to add features and improve the reliability of their search engine. However, the organisation trusted and empowered the software engineering team to implement a solution, which resulted in a rewrite of their search capability that improved search speeds on Twitter threefold.

    Don’t be binary

    In a previous software leadership role, I remember walking into a team facing considerable technical challenges specific to the scalability of the platforms running in production. This was due to the organisation scaling up incredibly quickly over recent years, which had resulted in unfortunate architectural decisions being made, as well as a continual focus on product growth with little to no priority on technical uplift. The software team was frustrated and beginning to see a higher than normal number of resignations. This needed to be resolved. Over the next 24 months the team began focusing their efforts on fundamental technical uplift. Platforms were decommissioned, others were re-architected, all alongside a much needed modernisation of software engineering tools and practices. The team was investing effort into rapidly evolving their platforms to support the organisation’s expansion aspirations. The first 18 months brought a breath of fresh air to the software team as there was now motivation and support to improve. At this point in time, the software teams were focusing on average 80% of their effort on technical uplift - almost the exact opposite of 18 months prior. Throughout the next six months though, the team that had once only wanted to improve their technology and platforms began to question their purpose within the organisation. Sure, their platforms were considerably more stable and scalable, they were even able to deploy production changes within hours instead of months, but something was missing. The organisation's product vision was missing, and teams started asking questions:
    • Why are we building this product?
    • Who are our actual customers?
    • What is our vision for this product?
    The mindset across the group began showing signs of drifting back to where it was 18 months prior, which leads me to my point. It is unhealthy to have a software team focus 100% of their effort on technical uplift and ignore product feature development. Likewise, you cannot expect a software developer to remain engaged if they focus solely on product feature development and disregard any technical uplift or hygiene. A balance is required, and this balance is different in every team. It may also evolve every 6-12 months due to an organisation's ever changing environment. As a software leader, you need to be constantly pulse-checking where teams are spending their effort. Your role is to motivate teams with a balance of technical uplift and product feature development, and to ensure a purpose for both exists and is known.

    How to find a healthy balance

    To solve the challenge of technical uplift versus product feature development it is important to accept that there needs to be a balance of the two at all times. Rarely, if ever, will a team need to focus 100% of their effort on one or the other, nor should you want a team to for the reasons discussed above. As with most aspects of software leadership there isn’t a single approach that achieves this outcome. However, the approaches below act as guidelines that can help ensure a balance between technical uplift and product feature development.

    Make responsibilities clear

    It is important the software team is given responsibility and ownership of two distinct areas. The first is how a feature is solutioned. Software developers are the closest to the code; they understand the impacts different solutioning decisions have on the maintainability and scalability of the codebase. Ensuring software teams own solutioning avoids less qualified, non-technical individuals outside of the team making fundamental technical decisions that could be detrimental to the health of the platform. As part of solutioning, teams should always gather input from relevant stakeholders, but purely in a consultative capacity. This also reduces the potential for corners being cut in solutions, which almost always results in incurring technical debt.

    The second is owning work effort. The software team undertakes the development, testing and deployment of work items, and thus are the only ones qualified to estimate the work effort required. It is impossible and naive to believe a stakeholder can define the work effort required by the team when they do not understand the low level intricacies of the platform. Stakeholders are within their rights, and should, share constraints that may be related to compliance, marketing deadlines or similar with the team. If a stakeholder attempts to define work effort and commit a team to it, as a software leader you need to clearly set expectations that they do not have the relevant expertise to make this decision and that it is up to the team to own work effort estimation. It can be necessary to communicate this message to senior leaders across the organisation, while keeping individuals honest when you notice it occurring.

    Where a stakeholder may get involved in conversations is around scaling back feature scope to reduce work effort required if there are legitimate time constraints. By allowing the software team to own work effort estimation you are ensuring that non-functionals are taken into account, that technical unknowns are accounted for, and that the team will solution in line with the software team's target state.

    Making technical uplift visible

    If technical uplift is not documented, not prioritised and not made visible, teams will always struggle to make progress on it. It would be like walking up a downward escalator: the amount of additional effort required is considerably more. Teams wouldn’t, and shouldn’t, start work on a new product feature without first understanding the outcomes and the priority, defining relevant user stories and documenting a solution. Why should technical uplift work be any different? Documenting technical uplift involves the team defining their most important initiatives, determining the value, prioritising them relative to each other and then estimating the work effort as a team. Try to focus on the immediate top 10 technical uplift initiatives rather than hundreds, to avoid overwhelming not just the team but the wider organisation. With technical uplift documented, it now has an opportunity to be discussed and prioritised alongside other priorities coming into the team. It is no longer a hidden list of vague work items in the minds of people within the team.

    Justifying technical uplift initiatives smarter

    Outside of making technical uplift visible, the other key reason it can struggle to be prioritised comes down to an inability to justify it in a way that is understood by the organisation. As a software leader you are responsible for being the voice of reason and support when it comes to prioritising technical uplift. You need to understand how the organisation justifies and prioritises initiatives so you can apply the same approach to technical uplift. Below are three common ways organisations prioritise:
    • Return on Investment (ROI):
      • ROI = total estimated value / cost to implement
      • Cost to implement includes all costs associated with developing and maintaining it, including people costs, infrastructure costs, licence costs and so on.
    • Competitor feature parity.
    • Cost of Delay (CoD) is the cost to the organisation of not implementing the feature by a certain point in time. This may be lost revenue, compliance fines or lost customers due to a competitor moving faster.
    Organisations will quite often use a combination of prioritisation methods as they mature to ensure a more complete justification. By understanding and applying the same justification to technical uplift initiatives it creates a level playing field allowing apples to be compared with apples.

    Let’s explore a simple example using ROI as the only method to prioritise a technical uplift and a product initiative across an organisation. The product initiative will bring $240,000 of profit in the first year, at a total work effort cost of $75,000 (which equates to three software engineers for three months, each earning $100,000 a year). There is also a yearly cloud infrastructure cost of $12,000. ROI for the first year is calculated by:
    • ROI = total profit / total cost
    • ROI = $240,000 / ($75,000 + $12,000)
    • ROI = 2.76

    This provides an ROI of 2.76.

    The technical uplift initiative will deliver $420,000 of savings by deprecating a legacy platform, reducing the need for a small team currently supporting it. The total work effort cost is $100,000 (which equates to two software engineers for six months, each earning $100,000 a year). ROI is calculated by:
    • ROI = total profit / total cost
    • ROI = $420,000 / $100,000
    • ROI = 4.2
    This provides an ROI of 4.2.

    Without taking into account any other prioritisation approach it is clear that the technical uplift initiative provides a higher ROI to the organisation and should potentially be prioritised first.
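    The two worked examples can be reproduced in a few lines of Python. This is a sketch of the ROI formula defined above; note the product figure works out to roughly 2.76:

```python
def roi(total_value: float, total_cost: float) -> float:
    # ROI as defined above: total estimated value divided by cost to implement.
    return total_value / total_cost

# Product initiative: $240,000 profit; three engineers for three months on
# $100,000/year salaries ($75,000) plus $12,000/year cloud infrastructure.
product_roi = roi(240_000, 75_000 + 12_000)

# Technical uplift: $420,000 of savings; two engineers for six months ($100,000).
uplift_roi = roi(420_000, 100_000)

print(f"Product ROI: {product_roi:.2f}")  # 2.76
print(f"Uplift ROI:  {uplift_roi:.2f}")   # 4.20
```

    Encoding the organisation's justification method like this also makes it trivial to re-run the comparison whenever estimates change.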

    It is also in your best interest to think carefully about the real costs of managing the platforms that technical uplift will simplify and improve. If an initiative aims to automate deployment pipelines for a specific platform, it is valuable to understand the cost of maintaining the existing manual deployment processes. You may have four teams spending three hours each week manually releasing code. Over the course of just one year the teams would spend $31,200 worth of effort manually releasing. This cost will continue to be incurred every year until the deployment process is automated. Manual deployment also takes precious capacity away from developing new product features. When describing impacts and value in this way, it becomes even easier to rally support from product management as they are selfishly (and rightly so) focused on delivering more product features, faster.
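    A quick back-of-the-envelope check of that figure. This is a sketch; the $50/hour fully loaded engineering rate is an assumption implied by the numbers, not stated in the text:

```python
# Assumed fully loaded engineering rate; implied by the figures, not stated.
HOURLY_RATE = 50

teams = 4
hours_per_week = 3   # manual release effort per team, per week
weeks = 52

annual_manual_hours = teams * hours_per_week * weeks      # 624 hours
annual_manual_cost = annual_manual_hours * HOURLY_RATE

print(annual_manual_cost)  # 31200 - recurring every year until automated
```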

    Finally, it is in your best interest to coach software leaders within your team on these prioritisation approaches. This will result in many of the justification and prioritisation conversations being able to happen directly within the team and avoid the need for your role to become involved in every single one of them.

    Coupling technical uplift with product features

    Sounds simple, right? If your organisation is product led, there will be a never ending list of new product initiatives in the backlog. One of your responsibilities as a software leader is to continually look for opportunities and synergies between technical uplift initiatives and new product initiatives. By coupling a new product feature with a relevant technical uplift improvement that resides in the same area of the codebase, you can often reduce the effort around testing, creating efficiencies compared to delivering them both in isolation.

    For example, there may be a product feature to offer customers an additional payment provider. Within the technical uplift backlog there could be an initiative to improve the level of unit testing within the platform's payment service. There is a clear relationship between these two items of work. In terms of efficiency, there is a benefit to increasing the unit testing coverage within the payment service while adding the additional payment provider. Not only do these two initiatives reside in the same area of the codebase, but the work effort in completing them together is also less than the sum of both their work efforts in isolation. This is because the effort required to test the platform's payment implementation can be performed just once, not twice. Secondly, the software developers will already be familiar with the payment section of the codebase, and thus more efficient, rather than context switching back at a future point in time.

    Another example may revolve around the need to support customer growth by introducing additional servers into the rotation. The software team also has a technical uplift initiative in the backlog to move all cloud infrastructure into code. Taking a short term tactical approach would see the team simply add an additional server manually into the rotation pool. However, this wouldn’t move the software team any closer to their target state of infrastructure as code. An alternative approach is to implement the additional server as infrastructure as code, but manually add the server into the rotation. This provides the benefit of meeting the product requirement while also aligning a portion of the work to the software team's target state without re-implementing the entire load balancing capability as code. Of course, if it is possible to also move the load balancer to infrastructure as code, it should be considered.

    Identifying synergies between technical uplift and product initiatives has been one of the most successful approaches I have been able to follow to ensure a healthy balance.

    Larger technical uplift sometimes requires a project

    In some instances, a technical uplift initiative is simply too large in work effort to be coupled with a product initiative. A fundamental goal of software teams is to release small and release often, which implies avoiding weeks or even months of code not being released into production. When a technical uplift initiative requires considerable work effort to see it through to completion, it needs to be treated as a project. This involves defining clear outcomes and measures of success, documenting high level scope (usually as user stories), estimating work effort and producing a justification aligned to the organisation's process.

    Using the technical uplift example of implementing an automated build pipeline, the justification may look like:
    • The current cost to the organisation of not implementing
      • Four teams each undertake one release every week.
      • Each release costs four hours of work effort.
      • As each release is after hours it also incurs three hours Time in Lieu (TIL).
      • This sums up to 1,456 hours a year releasing features.
      • This equates to $72,800.
    • Total cost in work effort to implement is 988 hours
      • This equates to two software engineers for three months.
      • The total work effort costs $49,400.
    • By entirely automating the platforms build pipelines, after the first year there will be a cost saving of $23,400 and then $72,800 for subsequent years.
    That is a very persuasive justification, as this initiative pays for itself within about eight months. Taking it one step further, as the teams are now able to release new product features with zero manual effort, it enables smaller, more frequent feature releases. This reduces production deployment risk while delivering value add features to customers faster than ever before. I would challenge you to find anyone in a product role who wouldn’t support this initiative within the organisation.
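    The justification above can be sketched as a small calculation, again assuming the $50/hour fully loaded rate implied by the figures; it also shows the payback period landing at roughly eight months:

```python
HOURLY_RATE = 50           # assumed fully loaded engineering rate

teams, releases_per_week, weeks = 4, 1, 52
hours_per_release = 4 + 3  # 4 hours of release effort + 3 hours Time in Lieu

annual_release_hours = teams * releases_per_week * weeks * hours_per_release
annual_release_cost = annual_release_hours * HOURLY_RATE

implementation_hours = 988  # two software engineers for three months
implementation_cost = implementation_hours * HOURLY_RATE

first_year_saving = annual_release_cost - implementation_cost
payback_months = implementation_cost / annual_release_cost * 12

print(annual_release_hours)       # 1456
print(annual_release_cost)        # 72800
print(first_year_saving)          # 23400
print(round(payback_months, 1))   # 8.1
```

    Laying the justification out as explicit inputs and derived figures makes it easy for stakeholders to challenge individual assumptions rather than the conclusion.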

    Ensuring a combined organisational roadmap

    In many organisations it can make sense to have a separate technical uplift roadmap and product initiative roadmap. This can support a more efficient planning process within specific teams as these roadmaps are often owned by different leaders. However, at an organisational level when organisational priorities are published, it is imperative to have a single and aligned roadmap that includes both technical uplift and product initiatives prioritised side by side.

    Breaking down larger technical uplift initiatives

    The larger the work effort for an initiative, the harder it is going to be to justify and prioritise. Organisational leaders usually struggle to support large initiatives as their value won’t be realised until a considerable time into the future. Prioritising a six-month product initiative is challenging enough, let alone a six-month technical uplift initiative. As a software leader, you need to be continuously looking for ways to break larger initiatives into smaller work items that can be delivered into production sooner.

    For example, take the technical initiative to implement a new organisation-wide logging and monitoring platform with a total work effort of seven months. By breaking it down into smaller work items, as seen below, the effort required to justify it is greatly decreased, risks are reduced, and work items that can run in parallel become clearer.
    1. One sprint to research and document a solution.
    2. Three sprints to build out the base logging and monitoring platform.
    3. Two sprints to upskill and train all software teams.
    4. Three sprints to integrate logging into the first platform.
    5. One sprint to implement monitoring into the first platform.
    6. ... Repeat steps 4 and 5 for each additional platform.
    Always focus on finding the smallest item of work that can be released into production that still adds value to the organisation.

    Definition of Done

    A Definition of Done (DoD) is essential for every software team to define and proactively adhere to. The purpose of a Definition of Done is to improve the overall quality of the capabilities that the team deploys into production. Quite often a Definition of Done includes specific non-functional requirements (NFRs) that a team values, which may include:
    • Automated testing exists for each feature.
    • Solution and code has been peer reviewed.
    • Documentation has been created or updated.
    • Monitoring has been implemented to ensure observability of all critical paths.
    • Product owner has reviewed and signed off on the new capability.
    A Definition of Done does not reduce existing technical debt, however, it plays a part in reducing new technical debt being created which would otherwise need to be paid back in the future.


    In summary:
    • The decision between technical uplift and product initiatives shouldn’t be a binary decision. An organisation needs to find a healthy balance between the two to ensure platform scalability, new product development and retention of team members.
    • Technical uplift initiatives need to be documented, prioritised and made visible to the wider organisation otherwise the team will always struggle to make progress.
    • Continually look for opportunities to couple technical uplift with new product initiatives. This will encourage efficiencies by reducing testing time and reducing context shifting the software team may face by tackling both initiatives separately.

    This blog post is taken from chapter 7 of my book "Leading Software Teams with Context, Not Control" - I hope you enjoyed it. If you did, check out the book, which has 22 other chapters on leading software teams, on Leanpub as well as other online stores like Amazon, in eBook or print format.

    I've written a book about leading software engineering teams

    Ten months ago I set myself a one year goal to write a book on leading software engineering teams. The book's purpose was to document the practices and initiatives I followed while leading software teams, and to provide myself with a reference model to refer back to and reduce the cognitive load of my day to day role. What started off as a casual few hours a week planning and firming up the book's chapters rapidly turned into 3+ hours every night researching, writing, rewriting, deleting and writing some more in an effort to complete this side project before our first baby was born. I didn't exactly make the 9 month deadline, but published v1.0.0 three days later. The book is titled "Leading Software Teams with Context, Not Control" and can be purchased from Leanpub as well as other online stores like Amazon in eBook or print format.

    Why this book was written

    As a software engineering leader, the scope of your role is extensive. You have many competing responsibilities and priorities that need to be balanced to ensure you and your team are as effective as possible. These can include providing architectural direction, driving peer to peer collaboration, ensuring cross-team alignment, motivating teams with purpose, supporting team members' career progression, or perhaps helping remove blockers and impediments. All of these efforts work to create a specific culture within a software team that aims to improve effectiveness, engagement, and retention.

    I wrote this book for software leaders who are responsible for leading teams. More specifically, it focuses on approaches for leading multiple software teams, whether directly or through leadership roles reporting into your role. There is a level of unique complexity that comes with leading, aligning and supporting multiple software development teams. This book aspires to provide you with helpful and reusable approaches that can be leveraged to bring a greater level of efficiency into your role as a leader. Many books have been written on leading teams or leading people; this one takes a lens of which specific practices and initiatives you should be investing your time in when leading technical software teams.

    Regardless of the size of your software team, if you find yourself needing to better balance the technical and people aspects of leading teams, or wanting guidance on initiatives you could be running to improve team alignment, effectiveness and engagement, then this book is written for you.

    What's in the book?

    The book comprises 23 chapters that discuss a broad range of initiatives you can run when leading software engineering teams. These range from baselining a software team, to effective software engineering metrics, to crafting an experimentation culture. The book is broken into 3 parts:
    • Part 1: Creating alignment
    • Part 2: Leading teams
    • Part 3: Uplifting team culture
    Each chapter has a loose structure of explaining the topic, talking to why it is important within a software engineering team, and different approaches you can use to implement it within your own team. Most chapters include multiple exercises that you can adopt into your organisation, as well as the occasional story around specific experiences I have had while leading teams in my previous and current roles.

    Interested in more insight into exactly what is in each chapter, just in case it tempts you to pick up a copy? Here it is...

    Part 1: Creating alignment

    1. Baselining a software team
      All the things you need to do to understand the current state of your software engineering team. It talks to technical and team-culture measures, and techniques to determine a team's baseline.
    2. Defining a software team target state
      A software target state is the technical and non-technical aspirations of the team that are flags on the top of the hill for you and your team to continuously climb towards. Learn how to define a software target state for your team.
    3. The software engineering roadmap
      A software engineering roadmap is a visual representation that defines a team's pathway to achieving their target state. This chapter explains the importance and how to implement one.
    4. Effective software team metrics
      Metrics within software are measurements that are put in place to keep you and your teams honest, accountable and continuously improving. Learn about what makes good team metrics, as well as metrics you need to avoid.
    5. Importance of collaborating on team goals
      Setting goals for your team or team members does not need to be an overly time consuming exercise, although it does need to align to your team target state, roadmap and team metrics.
    6. Balancing reactive versus strategic work
      Getting stuck in the weeds is all too common for software engineering teams, learn about strategies to better balance the time you spend on strategic based work items.
    7. Balancing technical uplift with product development
      The decision between technical uplift and product initiatives shouldn’t be a binary decision. Discover approaches to find a healthy balance between the two to ensure platform scalability, new product development and retention of team members.
    8. Introducing a new technology
      As a software leader, you are accountable for ensuring relevant new technology is being adopted within your team at a healthy and manageable pace. This is compared to implementing too many technologies too quickly and running the risk of losing great software developers from technology change fatigue or cognitive overload due to overly complex platforms.
    9. Platform SLAs
      Discover SLAs that add value to a platform, as well as important items to consider when implementing SLAs for your software teams platforms.

    Part 2: Leading teams

    1. Effective 1:on:1s
      1:on:1s are weekly or fortnightly catch-ups with each of your direct reports, that provide an opportunity for you to listen, provide guidance, coach, listen more and support them within their role and future career aspirations. Find out about approaches to make the most out of 1:on:1s within your team.
    2. Continuous performance feedback
      Performance feedback within software teams should be more than a once-yearly exercise orchestrated through the organisation's HR department. There are many opportunities to provide constructive feedback to your team every day of the week.
    3. Impactful position descriptions
      Position descriptions are short (no more than 3 pages), well formed documents that clearly articulate the impact a role has, where it sits within the organisation and breaks down the key responsibility areas of that role. Discover how to craft position descriptions that create a sense of excitement and motivation within a role that is genuinely valued within the organisation.
    4. Candidate centric interviews
      Understand what candidate centric interviews are and how they build trust between the interviewer and the candidate which results in them being more genuine about their experiences, concerns in the role and their career aspirations. While at the same time becoming more invested in the role within the team.
    5. Onboarding effectively
      Effective onboarding should include a combination of discussions, introductions, workshops, documentation sharing and mentoring to support new starters in becoming a motivated and effective team member. This chapter explains the different phases of onboarding to focus on within your software teams.
    6. Software team structures
      Learn how to implement ‘just enough hierarchy’ while coupling it with small team sizes of seven or fewer, to dramatically reduce the blast radius when an individual chooses to move on from the organisation.
    7. Career pathway framework for software teams
      A software career pathway framework links together roles to represent the different pathways of progression an individual can follow to advance their career within the team, aligned with their skills, experience and motivation. Building one is not a trivial task, however this chapter provides key learnings to fast track your own implementation.

    Part 3: Uplifting team culture

    1. Context over control
      It is in the title of this book, and this chapter explains how a context over control approach allows software leaders to lead vastly larger teams and projects when compared to a micromanagement approach.
    2. Engaging team meetings
      As a software leader you are accountable for ensuring weekly team meetings are set up, are engaging, have the right amount of energy and bring value to as many individuals within that session as possible.
    3. Team health checks
      Team health checks usually consist of six to ten questions that focus on technical, team and communication practices that each team discusses and rates every six to eight weeks. Understand what makes health checks valuable and approaches on running health checks within your teams.
    4. Building a culture of learning
      A software team that is built around a culture of learning allows its members to learn in all aspects of their role. Learn about how building a culture of learning can be achieved at no financial cost to the organisation.
    5. Crafting an experimentation culture
      Software teams that embrace an experimentation culture have a more maintainable technology stack, incur less technical debt, and are thus able to iterate faster on developing and releasing new capabilities. Discover approaches to encourage experimentation in every aspect of your team's day-to-day.
    6. Software engineering working groups
      Software engineering working groups involve a set of individuals working together to build a center of excellence around a specific topic. This chapter explains different approaches of implementing working groups within your organisation.
    7. Running a software team hack day
      Running a hack day allows your teams to take a break from the day to day and collaborate together on solving real software engineering problems being faced. Learn about a high level structure and run sheet you can use to run your very own hack day.
    Although this goal can now be ticked off my list, I'm looking forward to many future iterations as I evolve my ways of running software engineering teams. If you would like to check out a free sample, you can download it over at Leanpub. eBook formats are available on Leanpub, Amazon, Google Play and iBooks. Print copies can also be purchased from Amazon.

    How to bulk reset your Reddit upvote history

    Ever needed to bulk reset your upvote history for a subreddit? Yes, it is a bit of a random topic. However, I found myself in this situation recently and there wasn't a way to do so using Reddit's built-in capabilities. A simple piece of JavaScript came to the rescue!

    Disclaimer: This solution is coupled to the markup of the current Reddit website, so it will only remain functional until Reddit changes their markup.
    Start by navigating your browser to:
    Then open up your developer console (F12). Click on the console tab.

    The JavaScript code

    // Function to clear votes for a specific subreddit
    let clearVote = (subReddit) => {
        let clearedCounter = 0;
        let anchorsFound = document.querySelectorAll("a[href$='/r/" + subReddit + "/']");
        for (let i = 0, l = anchorsFound.length; i < l; i++) {
            let anchor = anchorsFound[i];
            let post = anchor.closest('.Post');
            let button = post.querySelector("button[data-click-id='upvote']");
            button.click();
            clearedCounter++;
        }
        console.log('Cleared ' + clearedCounter + ' upvotes from ' + subReddit);
    };

    // We need to continually keep scrolling to the bottom to load more history
    let scrollToBottomCounter = 0;
    let interval = setInterval(() => {
        if( scrollToBottomCounter > 300 ) {
            clearInterval(interval);

            // Call clearVote for the subreddits you want to clear your upvotes for
            // EG: for the subreddit that appears in the UI as r/CryptoMarkets/ you pass
            // 'CryptoMarkets' - the subreddit name without the starting 'r/' and trailing '/'.
            // You can call clearVote as many times as you require.
            clearVote('CryptoMarkets');
            return;
        }

        window.scrollTo(0, document.body.scrollHeight);
        scrollToBottomCounter++;
    }, 1500);

    How to run the script

    Now, after adding a 'clearVote' function call for each subreddit you want to clear upvotes for, paste the script into the 'console' within your browser. The script will simulate a user clicking the upvote button for all of your upvote history (clicking an already-upvoted post removes the upvote).
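    The DOM lookups in the script can be sketched in isolation. The helper names below are hypothetical, but they mirror the selector logic the script relies on: find anchors whose href ends with '/r/<subreddit>/', then climb to the containing post and click its upvote button.

```javascript
// Hypothetical helpers mirroring the selector logic used by clearVote().
// Builds the attribute-suffix selector that matches subreddit links.
function buildSubredditSelector(subReddit) {
  return "a[href$='/r/" + subReddit + "/']";
}

// True when an anchor's href points at the given subreddit.
function hrefMatchesSubreddit(href, subReddit) {
  return href.endsWith('/r/' + subReddit + '/');
}
```

    Because the match is on the href suffix, this works regardless of whether Reddit serves absolute or relative links.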
    Enjoy :-)

    NeuroEvolution using TensorFlow JS - Part 3


    This tutorial is part 3. If you have not completed NeuroEvolution using TensorFlowJS - Part 1 and NeuroEvolution using TensorFlowJS - Part 2, I highly recommend you do that first, as they:
    • Explain how to set up the codebase
    • Teach you how to code a basic NeuroEvolution implementation
    • Improve the performance of the NeuroEvolution
    • Implement the ability to save/load models
    In this last tutorial we will:
    • Learn how to use simple shape/object/color detection to gather the inputs required to feed into TensorFlow JS (instead of interacting directly with the GameAPI object)
    This tutorial is also based on the source code found on my GitHub account here https://github.com/dionbeetson/neuroevolution-experiment.

    Let's get started!

    Create base class

    We need to create a class that will be used instead of GameAPI to gather all of the required inputs from the game every 10ms, and then pass them to TensorFlow JS to make the same jump-or-not prediction.

    • There is already logic in js/ai/Ai.js to initialise this class if you select the checkbox 'Use Object Recognition' in the UI
    • There are a few small items we are not decoupling from the original GameAPI (eg: getScore()), as they were outside the scope of my experiment. But they could be decoupled if we really wanted to.
    How it works

    1. Every 10ms the game will execute a method to:
      1. Extract an image from the game canvas
      2. Convert the image to greyscale
      3. Using a 10x10 pixel grid, create a layout of the level identifying:
        1. What is whitespace (color=white)
        2. What are obstacles to jump over (color=grey)
        3. Where the player is (color=black)
      4. Look ahead 4 blocks (40 pixels) and determine if there is a block or dip to jump over
      5. Determine the x/y coordinates that can feed into TensorFlowJS
      6. Ai.js will then use the same logic as the previous tutorials to determine to jump or not
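    Step 3 above can be sketched in isolation. The function name below is hypothetical, but the thresholds match the ones the tutorial code applies later: each 10x10 cell of the greyscale snapshot is labelled by its grey value.

```javascript
// Hypothetical standalone sketch of the cell classification described above.
// A grey value of 0 (black) marks the player, a mid-grey value (210-240)
// marks a block/obstacle, and anything else is treated as background.
function classifyCell(greyValue) {
  if (greyValue === 0) {
    return 'player';
  }
  if (greyValue > 210 && greyValue < 240) {
    return 'block';
  }
  return 'background';
}
```

    Running this over every 10x10 cell of the snapshot yields the same kind of map the GameImageRecognition class builds below.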

    Create a file called js/ai/GameImageRecognition.js and then paste in the below code.


    class GameImageRecognition {
    }
    Let's now create the start() method, which is called by default in Ai.js. It will start the game, set up the canvas tracker (essentially a new hidden canvas DOM element that we paint an image of the game canvas into every 10ms) and use it for detecting shapes/objects/player position etc...


    start() {
      const self = this;
      this.#enableVision = document.querySelector("#ml-enable-vision").checked;
      // @todo - remove dependency on gameAPI object - although outside of scope of this example
      // Simulate what happens in the game
      setTimeout(() => {
        self.#isSetup = true;
      }, 100);
    }
    We will also add in a few other helper functions to get things moving.


      this.#visualTrackingCanvas = document.createElement("canvas");
      this.#visualTrackingCanvas.setAttribute("width", this.#gameApi.getWidth());
      this.#visualTrackingCanvas.setAttribute("height", this.#gameApi.getHeight());
      this.#visualTrackingCanvas.setAttribute("class", "snapshot-canvas");
      this.#gameApiCanvas = this.#gameApi.getCanvas();
    setHighlightSectionAhead(index) {
      // Not required for this demo
    }
    isOver() {
      return this.#gameApi.isOver();
    }
    isSetup() {
      return this.#isSetup;
    }
    As well as some required class variables


    #gameApi = new GameApi();
    #isSetup = false;
    #enableVision = false;
    Now wire up the UI event handler


    document.querySelector("#ml-use-object-recognition").addEventListener("change", function() {
      if ( this.checked ) {
        neuroEvolution.useImageRecognition = true;
      } else {
        neuroEvolution.useImageRecognition = false;
      }
    });
    And add in the required setters for useImageRecognition


    set useImageRecognition( useImageRecognition ) {
      this.#useImageRecognition = useImageRecognition;
    }
    Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. You should see the games begin, but with a lot of game.gameApi.getPlayerY is not a function errors. This is because we need to implement a range of functions to gather input.
    Before we do that though, we will add in the logic to extract information from the game canvas every 10ms.


    // Method to extract data from canvas/image and convert it into a readable format for this class to use
      let data = this.#gameApiCanvas.getContext('2d').getImageData(0, 0, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height);
      let dataGrey = this.convertImageToGreyScale(data);
      this.#visualTrackingMap = this.generateVisualTrackingMap(dataGrey, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height, this.#visualTrackingMapSize, this.#colors);
      this.updatePlayerPositionFromVisualTrackingMap(this.#visualTrackingMap, this.#colors);
      this.#sectionAhead = this.getSectionAhead(this.#playerX, this.#playerY, 4, this.#visualTrackingMapSize, this.#playerGroundY);
    // Method to create an object indexed by xposition and yposition with the color as the value, eg: 10x40 = grey
    generateVisualTrackingMap(data, width, height, visualTrackingMapSize, colors) {
      let visualTrackingMap = {};
      for( let y = 0; y < height; y+=visualTrackingMapSize ) {
        for( let x = 0; x < width; x+=visualTrackingMapSize ) {
          let col = this.getRGBAFromImageByXY(data, x+5, y+5);
          let key = x+'x'+y;
          visualTrackingMap[key] = colors.background;
          if ( 0 == col[0] ) {
            visualTrackingMap[key] = colors.player;
          }
          if ( col[0] > 210 && col[0] < 240 ) {
            visualTrackingMap[key] = colors.block;
          }
        }
      }
      return visualTrackingMap;
    }
    The above functions have extra dependencies. Let's add in functionality to convert an image into greyscale, as well as get the RGBA value of a specific pixel in that image.


    convertImageToGreyScale(image) {
      let greyImage = new ImageData(image.width, image.height);
      const channels = image.data.length / 4;
      for( let i=0; i < channels; i++ ){
        let i4 = i*4;
        let r = image.data[i4 + 0];
        let g = image.data[i4 + 1];
        let b = image.data[i4 + 2];
        // Only the red channel carries the luminance value, as it is the only channel read later
        greyImage.data[i4 + 0] = Math.round(0.21*r + 0.72*g + 0.07*b);
        greyImage.data[i4 + 1] = g;
        greyImage.data[i4 + 2] = b;
        greyImage.data[i4 + 3] = 255;
      }
      return greyImage;
    }
    getRGBAFromImageByXY(imageData, x, y) {
      let rowStart = y * imageData.width * 4;
      let pixelIndex = rowStart + x * 4;
      return [
        imageData.data[pixelIndex + 0],
        imageData.data[pixelIndex + 1],
        imageData.data[pixelIndex + 2],
        imageData.data[pixelIndex + 3]
      ];
    }
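    As an aside, the 0.21/0.72/0.07 weights used in convertImageToGreyScale() are a standard luminance approximation (green contributes most to perceived brightness). A quick standalone check, with a hypothetical function name:

```javascript
// Computes the luminance of an RGB pixel using the same 0.21/0.72/0.07
// weights as convertImageToGreyScale() above.
function toLuminance(r, g, b) {
  return Math.round(0.21*r + 0.72*g + 0.07*b);
}
```

    White stays 255 (the weights sum to 1.0), black stays 0, and a pure-red pixel drops to 54, well below the 210-240 'block' window used when building the tracking map.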
    Add these class variables as well


    #visualTrackingMap = {};
    #visualTrackingMapSize = 10;
    #sectionAhead = [];
    #playerX = 0;
    #playerY = 0;
    #playerGroundY = 0;
    #colors = {
      block: 'grey',
      visionOutline: 'red',
      player: 'black',
      background: 'white'
    };
    Now we want to add in 3 methods that will be called from Ai.js to detect some of the inputs from the previous tutorials.


    getHeight() {
      return this.#visualTrackingCanvas.height;
    }
    getWidth() {
      return this.#visualTrackingCanvas.width;
    }
    getPlayerY() {
      return this.#playerY;
    }
    Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. Again, you should see the games begin, but now with a lot of game.gameApi.getPlayerX is not a function errors. OK, we are making progress; let's implement this method.
    This is actually the method we hook into to do all of the processing of the game's canvas. Realistically we could have pulled this out into its own setInterval(), but for the purpose of this demo let's couple it in with getPlayerX(), which is called within every think() invocation.


    getPlayerX() {
      return this.#playerX;
    }
    Now add in a method to determine the player's x/y position on the canvas (we do this by finding the 10x10 pixel section whose color is #000000 (black)). Simple, yet effective.


    updatePlayerPositionFromVisualTrackingMap(visualTrackingMap, colors) {
      for (const xy in visualTrackingMap) {
        let value = visualTrackingMap[xy];
        if ( colors.player == value) {
          let position = xy.split('x');
          this.#playerX = parseInt(position[0]);
          this.#playerY = parseInt(position[1]);
          // If we don't have a ground, then set it
          if( 0 == this.#playerGroundY ) {
            this.#playerGroundY = this.#playerY;
          }
        }
      }
    }
    Next up is a lot of logic to look through visualTrackingMap, which stores the color of each 10x10 pixel section, and determine what lies ahead in relation to the player.


    getSectionAhead(playerX, playerY, aheadIndex, pixelMapSize, playerGroundY){
      let x;
      let y;
      let section;
      let aheadWidth = aheadIndex*10;
      x = Math.ceil(playerX/pixelMapSize) * pixelMapSize + aheadWidth;
      y = Math.ceil(playerY/pixelMapSize) * pixelMapSize;
      section = this.getCollisionSectionAhead(x, y);
      if( false == section ) {
        section = [x, playerGroundY+pixelMapSize, pixelMapSize, pixelMapSize];
      }
      return {
        x: section[0],
        y: section[1],
        width: section[2],
        height: section[3]
      };
    }
    // Logic to get the x/y and width/height of the section ahead that we need to use to determine if we jump over it or not
    getCollisionSectionAhead(x, y) {
      // Look for drop/dip section ahead we need to jump over
      y = this.#playerGroundY;
      if ( this.isSectionSolid(x, y) ) {
        // Look for taller section ahead we need to jump over
        let xyStart = this.findTopLeftBoundsOfSolidSection(x, y-this.#visualTrackingMapSize);
        let xyEnd = this.findTopRightBoundsOfSolidSection(xyStart[0], xyStart[1], 1);
        return [xyStart[0], xyStart[1], xyEnd[0] - x, y - xyEnd[1] + this.#visualTrackingMapSize];
      } else {
        if ( false === this.isSectionSolid(x, y+this.#visualTrackingMapSize) ) {
          let xyStart = this.findBottomLeftBoundsOfSolidSection(x, y);
          let xyEnd = this.findBottomRightBoundsOfSolidSection(xyStart[0], xyStart[1], 1);
          return [xyStart[0], xyEnd[1]+this.#visualTrackingMapSize, xyEnd[0] - x, this.#visualTrackingMapSize];
        }
      }
      return false;
    }
    isSectionSolid(x, y){
      let section = this.#visualTrackingMap[x + 'x' + y];
      if ( this.#colors.block == section ) {
        return true;
      }
      return false;
    }
    findTopLeftBoundsOfSolidSection(x, y) {
      if ( this.isSectionSolid(x, y) ) {
        return this.findTopLeftBoundsOfSolidSection(x, y-this.#visualTrackingMapSize);
      }
      return [x, y+this.#visualTrackingMapSize];
    }
    findTopRightBoundsOfSolidSection(x, y, counter) {
      // counter bounds the look-ahead to 5 sections
      if ( counter < 5 && this.isSectionSolid(x, y) ) {
        return this.findTopRightBoundsOfSolidSection(x+this.#visualTrackingMapSize, y, counter+1);
      }
      return [x, y];
    }
    findBottomLeftBoundsOfSolidSection(x, y) {
      if ( false === this.isSectionSolid(x, y) && y < this.#visualTrackingCanvas.height) {
        return this.findBottomLeftBoundsOfSolidSection(x, y+this.#visualTrackingMapSize);
      }
      return [x, y-this.#visualTrackingMapSize];
    }
    findBottomRightBoundsOfSolidSection(x, y, counter) {
      // counter bounds the look-ahead to 5 sections
      if ( counter < 5 && false === this.isSectionSolid(x, y) ) {
        return this.findBottomRightBoundsOfSolidSection(x+this.#visualTrackingMapSize, y, counter+1);
      }
      return [x, y];
    }
    getSectionFromPlayer(index) {
      return {
        x: this.#sectionAhead.x,
        y: this.#sectionAhead.y,
        width: this.#visualTrackingMapSize,
        height: this.#playerY-this.#sectionAhead.y
      };
    }
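    The grid-snapping arithmetic at the top of getSectionAhead() can be checked in isolation. This hypothetical helper assumes the tutorial's 10px grid, snapping the player's x position up to the nearest cell boundary and then offsetting by the look-ahead distance:

```javascript
// Snap a coordinate up to the nearest grid cell, then look ahead by
// aheadIndex cells (assumes the 10px cell size used throughout the tutorial,
// where aheadIndex*10 and aheadIndex*pixelMapSize are equivalent).
function snapAhead(playerX, pixelMapSize, aheadIndex) {
  return Math.ceil(playerX / pixelMapSize) * pixelMapSize + aheadIndex * pixelMapSize;
}
```

    For a player at x=123 on the 10px grid with a look-ahead of 4 cells, the section probed starts at x=170.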
    I will be the first to admit the above logic is not clean or performant and can really be improved, but the purpose of this demo was to prove what is possible - feel free to submit a PR if you want to improve it :-)
    Getting closer... Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. You should now see a lot of game.gameApi.isPlayerJumping is not a function errors. Let's implement that and a few other methods that are needed regarding the player.


    isPlayerJumping() {
      if( this.#playerY < this.#playerGroundY ) {
        return true;
      }
      return false;
    }
    getPlayerVelocity() {
      return 0;
    }
    canPlayerJump() {
      if( this.isPlayerJumping() ) {
        return false;
      }
      return true;
    }
    Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. You should now see a lot of game.gameApi.setDebugPoints is not a function errors. Let's implement it.


    setDebugPoints(debugPoints) {
      // Not required for this demo
    }
    We are actually missing a key method: jump(). For the sake of this demo, we are just going to revert to calling the GameAPI. We could simulate this with a bit of trickery by focusing on the canvas and triggering the 'spacebar' key, but that is a little too much for this demo.


    jump() {
      // The only way to simulate this is by pressing the spacebar key, but because we have multiple games at once it isn't easily possible, so we revert to the GameAPI.
      this.#gameApi.jump();
    }
    Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. You should now see the game mostly work, although a few errors will still pop up. Add in the below.


    getProgress() {
      return this.#gameApi.getProgress();
    }
    getScore() {
      return this.#gameApi.getScore();
    }
    isLevelPassed() {
      return this.#gameApi.isLevelPassed();
    }
    remove() {
      if( null !== this.#visualTrackingCanvas.parentNode ) {
        this.#visualTrackingCanvas.parentNode.removeChild(this.#visualTrackingCanvas);
      }
    }
    show() {
      if( null !== this.#visualTrackingCanvas.parentNode ) {
      }
    }
    Reload your browser, check 'Use Object Recognition' and click the 'Start evolution' button. Everything should work now; if you let it run, it will eventually solve all of the levels.
    However... wouldn't it be nice to see what the ML is actually seeing in each game? Let's add in some debugging info.


    drawRectOnCanvas(rect, color) {
      let context = this.#visualTrackingCanvas.getContext('2d');
      context.beginPath();
      context.strokeStyle = color;
      context.lineWidth = "1";
      context.rect(rect.x, rect.y, rect.width, rect.height);
      context.stroke();
    }
    // Function responsible for drawing what the computer sees; we then use this to get the inputs for tensorflow
    drawMachineVision() {
      if( this.#enableVision ) {
        // Clear everything first
        this.#visualTrackingCanvas.getContext('2d').clearRect(0, 0, this.#visualTrackingCanvas.width, this.#visualTrackingCanvas.height);
        // Draw player
        this.drawRectOnCanvas({
          x: this.#playerX,
          y: this.#playerY,
          width: this.#visualTrackingMapSize,
          height: this.#visualTrackingMapSize,
        }, this.#colors.visionOutline);
        // Draw map sections
        for (const xy in this.#visualTrackingMap) {
          let value = this.#visualTrackingMap[xy];
          if ( this.#colors.block == value) {
            let position = xy.split('x');
            this.drawRectOnCanvas({
              x: parseInt(position[0]),
              y: parseInt(position[1]),
              width: this.#visualTrackingMapSize,
              height: this.#visualTrackingMapSize
            }, this.#colors.visionOutline);
          }
        }
        // Draw the section ahead
        this.drawRectOnCanvas({
          x: this.#sectionAhead.x,
          y: this.#sectionAhead.y,
          width: this.#sectionAhead.width,
          height: this.#sectionAhead.height,
        }, 'blue');
      }
    }
    Then change the method getPlayerX() to look like this.


    getPlayerX() {
      this.drawMachineVision();
      return this.#playerX;
    }
    Reload your browser, check 'Use Object Recognition', check 'Enable ML vision' and click the 'Start evolution' button. You should now see lots of red boxes that highlight what the ML is actually using as inputs. Your browser will most likely struggle, however it will work.

    I get consistent results along the lines of:
    • Level 1: Takes 10-25 generations
    • Level 2: Takes 15-40 generations
    • Level 3: Takes 40-400 generations (as it has to learn to jump blocks and gaps).


    I hope this tutorial was useful for learning how to use TensorFlowJS to build a NeuroEvolution implementation. If you have any questions, leave them in the comments below, or tweet me on Twitter at @dionbeetson

    Source code

    All source code for this tutorial can be found at https://github.com/dionbeetson/neuroevolution-experiment.