The main reason an organization creates a threat model is to stay aware of its overall security risk exposure; applying mitigation techniques can then further reduce the attack surface. Because the threats are prioritized, the model also helps keep security investment within the planned budget. Not least, a threat model serves as an instrument for measuring the level of security achieved by an application or information system. Moreover, since threat modeling is part of the Secure Development Lifecycle, the model can be updated over time, allowing the organization to stay aligned with its security needs and the latest vulnerability assessments.
The URL of the original article:
The reviewed article, written by Bryan Ware, is titled “Beyond Machine Learning: Using Models in AI for Security”. In summary, the author contrasts two approaches to AI-based security analytics: 1) having human analysts review the preliminary results produced by various AI algorithms (especially for threat detection), and 2) applying a probability-based domain model before the human check. In general, both techniques are effective at detecting anomalies (abnormal behavior); however, applying the model before the “heavy lifting” appears to help analysts focus only on the most relevant findings. A secondary idea of the article is that ML and AI techniques cannot, on their own, distinguish “good” from “bad” in cybersecurity terms; this is the main reason an additional layer of preparation (modeling) must be introduced before human experts formulate final decisions.
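The second approach described above can be sketched in a few lines of code. This is only an illustrative assumption, not an implementation from the article: the function names, asset labels, and prior probabilities below are hypothetical, and the "domain model" is reduced to a simple prior per asset class that weights the raw AI anomaly score before anything reaches a human analyst.

```python
# Hypothetical sketch (not from the article): re-rank raw AI anomaly
# scores with a simple probabilistic domain model so that human
# analysts see the most relevant findings first.

def prioritize(findings, domain_prior, default_prior=0.1):
    """Combine each finding's AI anomaly score with a domain-model
    prior for its asset class, then sort by the combined priority."""
    ranked = []
    for f in findings:
        prior = domain_prior.get(f["asset"], default_prior)
        ranked.append({**f, "priority": f["score"] * prior})
    return sorted(ranked, key=lambda f: f["priority"], reverse=True)

# Illustrative data: a noisy workstation alert scores higher raw,
# but the domain model considers the domain controller more critical.
findings = [
    {"asset": "workstation", "score": 0.9},
    {"asset": "domain_controller", "score": 0.6},
]
prior = {"domain_controller": 0.95, "workstation": 0.2}

for f in prioritize(findings, prior):
    print(f["asset"], round(f["priority"], 3))
```

Even in this toy form, the domain controller finding rises above the workstation one despite its lower raw score, which mirrors the article's point: the model focuses human attention on what matters before the expensive manual review begins.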