
“The question I ask every time before writing a user story”

The user story is probably the most common way to capture requirements (it is the de facto standard, at least). It has its good and bad sides, but one thing is certain: you should always know what question a user story will answer before you write it. If the answer is nothing more than “requirement”, you are wasting your time.

The question I ask every time before writing a user story is: “What is the minimal marketable feature?”

I can’t remember if it was Robert C. Martin who coined the term, but the idea has been around for a long time now: define what your code must do to be valuable, not which features are cool or novel. As I’m fond of saying: adding unnecessary technical debt because you want to do something cool is about as wise as borrowing money from a loan shark because you want to party hard on payday!

A common way developers go wrong is to fill their backlog with too many features and prioritize them poorly. If your user stories affect performance, take steps before committing to them, for example by splitting each story into parts that are not performance-related, or by measuring the performance impact of each feature you have.

This is part one of a two-article series. This first article covers the problems that are usually encountered when trying to improve software performance; in particular, we’ll look at common mistakes and how to avoid them. Part two will focus on the actual techniques and procedures that can be used to spot and fix performance issues: both those that can be found (and fixed) easily and those that require more involved investigation and analysis. As such it will be far longer than part one; whereas this section should take you about an hour to read fully, part two might take days to work through!

But before we get started, let’s talk about what is meant by ‘performance’. Clearly it is the speed at which something happens (measured against the time taken to cause it), but what does that mean in practice? There are three main areas where performance problems occur, each with its own set of issues, techniques and tools:

Processing time – how long a process takes to complete. Examples include running a program, rendering a scene or generating an animation. This is usually measured in seconds or milliseconds.

Memory allocation – how much memory is required for a specific task. For example, a high-resolution texture requires more space on your graphics card than a low-resolution version of equivalent image quality. This is usually measured in megabytes or gigabytes of RAM.

Data transfer rates – everything from querying databases and reading files from disk to sending data over the internet, as well as loading assets into memory for rendering or playing an animation. This is usually measured in megabytes per second.
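To make these three areas concrete, here is a minimal sketch, assuming a recent Unity version and a standard C# MonoBehaviour, that logs one rough number for each: frame time for processing, the engine’s currently allocated memory, and the speed of a file read from disk as a stand-in for data transfer. The class name, test file path and once-per-60-frames logging interval are illustrative choices, not values from this article.

using System.Diagnostics;
using System.IO;
using UnityEngine;
using UnityEngine.Profiling;

// Illustrative sketch: logs one rough number for each of the three areas
// discussed above. Attach to any GameObject in a test scene.
public class RoughPerformanceLogger : MonoBehaviour
{
    // Hypothetical test file; replace with a file that exists in your project.
    [SerializeField] private string testFilePath = "Assets/StreamingAssets/test.bin";

    private void Update()
    {
        // Processing time: how long the last frame took, in milliseconds.
        float frameMs = Time.deltaTime * 1000f;

        // Memory allocation: memory currently allocated by the engine, in megabytes.
        long allocatedMb = Profiler.GetTotalAllocatedMemoryLong() / (1024 * 1024);

        // Log roughly once per second to avoid spamming the console.
        if (Time.frameCount % 60 == 0)
        {
            UnityEngine.Debug.Log($"Frame: {frameMs:F1} ms, allocated: {allocatedMb} MB");
        }
    }

    // Data transfer rate: time a file read and report megabytes per second.
    public void MeasureFileRead()
    {
        if (!File.Exists(testFilePath)) return;

        var stopwatch = Stopwatch.StartNew();
        byte[] data = File.ReadAllBytes(testFilePath);
        stopwatch.Stop();

        double seconds = stopwatch.Elapsed.TotalSeconds;
        double megabytes = data.Length / (1024.0 * 1024.0);
        double rate = seconds > 0 ? megabytes / seconds : 0;
        UnityEngine.Debug.Log($"Read {megabytes:F2} MB in {seconds:F3} s ({rate:F1} MB/s)");
    }
}

None of this replaces a real profiler, but numbers of this kind are usually enough to tell which of the three areas deserves a closer look.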

Processing time, memory allocation and data transfer rates are often interrelated. For example, increasing the resolution of a texture means it will take longer to load from disk and use more RAM, but it can mean fewer pixels need to be processed each frame, so less time is taken to render the scene. Changing one of these parameters can therefore increase or decrease the others, depending on the values involved.

In this article I’ll discuss three ways you can measure performance: using the Unity Profiler window, writing your own code to profile performance, and using third-party tools.

Unity Profiler Window:

The simplest way to measure performance is to look at Unity’s Profiler window, which can be found under the “Window” dropdown menu of the main Unity editor. This window shows how long it took to render each frame, where CPU time was spent on scripts and rendering, and so on, over recent frames (the length of this history can be adjusted in the editor preferences in recent Unity versions). You can quickly find areas that might need improvement by experimenting with your game and glancing at these statistics as you go. Here is a snapshot from a simple scene:

The most important part of that snapshot is the “Scripts” graph. You can see that on each frame, a little more than half of the CPU time was spent in script code.

In general these numbers are pretty close to what you should expect from a scene running at 60 FPS on a modern CPU (a budget of roughly 16 milliseconds, or 0.016 seconds, per frame). This is where profiling gets tricky, because it depends greatly on your own game and what kind of scripts you have written. The recommendations that follow assume your frame time exceeds this budget; otherwise it will be very hard to notice an improvement from optimizing, for example, rendering costs.

Code to profile performance:
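As a minimal sketch of this approach, assuming a hypothetical ExpensiveSystem component whose RebuildPaths method stands in for the code you actually want to measure, you can combine a System.Diagnostics.Stopwatch (a plain millisecond number you can log) with Unity’s Profiler.BeginSample/EndSample markers (so the same section shows up under its own label in the Profiler window):

using System.Diagnostics;
using UnityEngine;
using UnityEngine.Profiling;

// Minimal hand-rolled profiling sketch. ExpensiveSystem and RebuildPaths are
// hypothetical names standing in for whatever code you want to measure.
public class ExpensiveSystem : MonoBehaviour
{
    private void Update()
    {
        // Mark the section so it appears under its own label in the Profiler window.
        Profiler.BeginSample("ExpensiveSystem.RebuildPaths");

        // Time the same section with a Stopwatch for a plain millisecond number.
        var stopwatch = Stopwatch.StartNew();
        RebuildPaths();
        stopwatch.Stop();

        Profiler.EndSample();

        // Only log when the cost is noticeable, to keep the console readable.
        if (stopwatch.Elapsed.TotalMilliseconds > 1.0)
        {
            UnityEngine.Debug.Log(
                $"RebuildPaths took {stopwatch.Elapsed.TotalMilliseconds:F2} ms");
        }
    }

    private void RebuildPaths()
    {
        // Placeholder for the real work being measured.
    }
}

The Stopwatch number is handy for logging and automated performance checks, while the BeginSample/EndSample markers are what make the cost visible in the Profiler window discussed above.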

Third-party tools:

The Unity profiler is very easy to use, gives a good overview of performance issues and provides useful statistics. However, it’s not always able to tell you exactly where a problem comes from (e.g. which scripts are causing jitter), nor does it give you information about other aspects of your application, such as rendering or physics stability.

Mono Performance Metrics is a free profiling tool that provides this kind of detail effectively, by using custom user scripts to mark important events during execution. It also supports iOS and Android devices, although due to platform-specific issues it only works with Unity Pro.
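I won’t reproduce that tool’s own API here. As a rough illustration of the same idea, marking important events from your own scripts so a profiler can attribute time to them, here is a sketch using Unity’s built-in ProfilerMarker from the Unity.Profiling namespace (available in recent Unity versions); the EnemySpawner class and marker name are hypothetical.

using Unity.Profiling;
using UnityEngine;

// Generic illustration of marking important events from user scripts.
// This uses Unity's built-in ProfilerMarker, not Mono Performance Metrics' API.
public class EnemySpawner : MonoBehaviour
{
    // A named marker; the string is what you will see in profiling captures.
    private static readonly ProfilerMarker SpawnMarker =
        new ProfilerMarker("EnemySpawner.SpawnWave");

    public void SpawnWave(int count)
    {
        // Everything inside this scope is attributed to the named marker.
        using (SpawnMarker.Auto())
        {
            for (int i = 0; i < count; i++)
            {
                // Placeholder for the real spawning work.
            }
        }
    }
}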