Threat modelling is hard, and there are a few reasons why. One challenge is the widely held view that effective threat modelling requires you to 'think like an attacker'. That is a problem because the people attempting it are usually developers and engineers, and thinking like an attacker requires a skillset that developers, if they have it at all, tend to pick up through osmosis rather than deliberate training, which makes it difficult for them to truly adopt 'the security mindset'. Another issue is creator blindness. Because the developers in question often played a key role in building the system's features, it is hard for them to change perspective when trying to 'think like an attacker'. This is the same reason authors pay other people to proofread their work, and why software projects separate development and testing roles.
Threat modelling is also a task where errors and omissions carry extreme consequences further down the line, which is why it is important to mitigate the issues listed above (among others not covered in this post). One option would be to take threat modelling away from developers entirely and delegate it to security specialists. The problem with this is that developers have something security specialists don't: knowledge of the system. For the specialists to reach the same level of understanding, the developers would have to spend considerable time and effort handing over that information, slowing down feature development. On the flip side, you could train developers to threat model at the level of security specialists. This, however, has not proven effective so far.
In December 2012, Adam Shostack, a member of Microsoft's security development lifecycle strategy team, published a white paper for the 'Elevation of Privilege' card game. Inspired by 'Protection Poker', published earlier by Laurie Williams (and which can be seen here), it is another example of the serious games movement, which brings simplicity and fun to serious tasks such as threat modelling. Serious games are a serious (no pun intended) field of work applied across a wide range of industries. Their purpose is to make difficult and important tasks more fun, though they are not intended to be played primarily for amusement.
How to play
First of all, you must have a diagram of the system you intend to threat model. The system's architecture will more than likely already be written up somewhere, so this shouldn't be difficult to get hold of. The idea of the game is to work around any bias on the part of the developers and ensure that every system is checked consistently. This is why you use a predefined deck of cards, which can be found here.
The deck has six suits, one for each STRIDE threat category: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Each card describes a specific threat. For instance, the Queen of Repudiation says "An attacker can say 'I didn't do that' and there's no way you can prove it". To play the game, you connect that threat to the diagram (if possible). For example, if there are no logs recording the transactions that take place, the player who played the Queen of Repudiation would get a point, because the card was successfully linked to an area of the diagram where the threat could apply.

Play proceeds around the table, following suit where possible: if the Queen of Repudiation is led, the next player must also play a card from the Repudiation suit if they have one in their hand. Whichever player links the highest-value card to the system diagram wins the hand and earns a bonus point, unless someone plays an Elevation of Privilege card, which trumps the hand and wins. The Queen of Repudiation card can be seen below.

Once the game is over, you tally each player's points and declare a winner. You then create tickets for all the issues found in the system (the main aim of the game) so they can be tracked in management software such as Jira or Trello.
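The scoring rules above can be sketched in code. This is a minimal illustration under simplifying assumptions, not the official rules: the function names, card representation, and rank ordering are my own, and it only models a single hand (one linked point per card played, a bonus point for winning the hand, Elevation of Privilege as trump).

```python
# Sketch of scoring one hand of Elevation of Privilege (simplified).
# Card representation and names here are illustrative assumptions.

STRIDE_SUITS = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

# Ranks in ascending order of value (2 low, Ace high).
RANKS = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"]


def score_hand(plays, lead_suit):
    """Score one hand.

    `plays` is a list of (player, suit, rank, linked) tuples, where `linked`
    records whether the player managed to connect the threat to the diagram.
    Each linked threat earns its player a point; the highest linked card in
    the lead suit wins the hand and a bonus point, unless a linked
    Elevation of Privilege card trumps it.
    """
    points = {}
    for player, suit, rank, linked in plays:
        points.setdefault(player, 0)
        if linked:
            points[player] += 1  # point for linking the threat to the diagram

    def value(play):
        _, suit, rank, _ = play
        is_trump = suit == "Elevation of Privilege"
        return (is_trump, RANKS.index(rank))  # trump beats any lead-suit card

    # Only linked cards in the lead suit (or EoP trumps) can win the hand.
    eligible = [p for p in plays
                if p[3] and p[1] in (lead_suit, "Elevation of Privilege")]
    if eligible:
        winner = max(eligible, key=value)[0]
        points[winner] += 1  # bonus point for winning the hand
    return points


if __name__ == "__main__":
    result = score_hand(
        [("alice", "Repudiation", "Q", True),
         ("bob", "Repudiation", "7", False),
         ("carol", "Elevation of Privilege", "5", True)],
        lead_suit="Repudiation",
    )
    print(result)  # carol's trump wins the hand despite alice's higher rank
```

In this example, alice links the Queen of Repudiation for one point, bob fails to link his card, and carol's low Elevation of Privilege card trumps the hand for two points in total.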
If you have a team that works remotely (and who doesn’t these days), you can play an online version over a conference call here.
Why this works
Turning a serious task with impactful consequences into a game makes it far easier to do that task well.
Adam Shostack spoke at a Black Hat conference about hearing, in the past, that a threat had gone unmitigated because the manager of a feature was defensive over it and the developers didn't feel in a position to challenge them. Turning the process into a game creates a level playing field and lets employees challenge upwards during play without having to worry about their manager's perception of them.
The game also requires participation from every developer or individual who is dealt a hand, which generates discussion among all players involved. The threats also act as hints, encouraging players to bounce ideas off each other and get instant feedback.
As discussed earlier, the game also relieves the pressure of the task's consequences by abstracting them away while keeping the aim of the task relevant. At the end of the game, you have a real threat model and a set of tickets to manage and mitigate.
There's also the fact that some people tend to switch off while others talk during standard, lengthy threat modelling discussions and meetings. Gamifying the process helps developers engage, turns a dull task into something to look forward to, and thereby captures input from a wider cross-section of the team.
The game can also be played in breakout groups; the resulting threat models can then be cross-referenced to check for consistency, with a wider discussion at the end.