Crowdsource comments on a talk

On July 18, I’m giving a keynote talk in Las Vegas at Worldcomp 2011 (the World Congress in Computer Science, Computer Engineering, and Applied Computing). I’ve enclosed the abstract of my presentation, below.  The talk will be in the Lance Burton Theater at the Monte Carlo Resort and Casino. I’m told that the audience is likely to be around 1000 people, so there won’t be much opportunity for comments from the audience.

I have most of the talk prepared, but I thought I would ask, ahead of time, if anyone has some thoughts on the topic/abstract that I should consider before I finish my preparations. I can’t share the talk ahead of my presentation — sorry. I may not be able to respond to every email, but I’ll try. Any and all comments will be appreciated.

If you have any comments or ideas you think I should consider, please share them with me by email.

My talk is partly informed by things I’ve written about in my CERIAS blog over the last 3 years, and by a JASON report, The Science of Cyber Security, from November 2010. (Many people hailed that JASON report, but I think it missed the mark in several places.) Of course, I am also applying 30 years in computer research and applied computing, but I don’t have a specific link for that!


The Nature of Cyber Security

Abstract—There is an on-going discussion about establishing a scientific basis for cyber security. Efforts to date have often been ad hoc and conducted without any apparent insight into deeper formalisms. The result has been repeated system failures, and a steady progression of new attacks and compromises.

A solution, then, would seem to be to identify underlying scientific principles of cyber security, articulate them, and then employ them in the design and construction of future systems. This is at the core of several recent government programs and initiatives.

But the question that has not been asked is if “cyber security” is really the correct abstraction for analysis. There are some hints that perhaps it is not, and that some other approach is really more appropriate for systematic study — perhaps one we have yet to define.

In this talk I will provide some overview of the challenges in cyber security, the arguments being made for exploration and definition of a science of cyber security, and also some of the counterarguments. The goal of the presentation is not to convince the audience that either viewpoint is necessarily correct, but to suggest that perhaps there is sufficient doubt that we should carefully examine some of our assumptions about the field.


9 Responses to “Crowdsource comments on a talk”

  1. skip saunders Says:

    It isn’t a science … because no laws of nature are involved (other than the human law that allows people to keep evolving criminal strategies).

    Cyber security is an objective… but it (just like a zero burglary rate) is an objective that can never be met. Instead (just like burglary and other crimes) it is a social situation, governed by social science (or perhaps I should say social “science”) relationships. The most notorious of these is: any defense can always be overcome… and any offense always has a portion of its life-cycle during which no defense has been developed. The combination of these two situations means one can never be cyber secure… one can only seek more prophylaxis.


    • Alex Says:

      What an interesting definition of science! I happen to be optimistic that observation and experiment on notional concepts such as “to secure” and “risk” can lead to the development of systematic study.


      • skip saunders Says:

        You are right, I shouldn’t be concerned about whether something is a science or not. The basic challenge is: achieving cyber security. My point is that such an achievement is not possible in absolute terms unless/until one can derive provable properties about anything involving software.

        In the meantime, all attempts at cyber security need to acknowledge its lifecycle properties: i.e., there is a period of time during which an offensive tactic has no corresponding defensive mechanism in place (or, in most cases, even defined), because there is always a time during which a zero-day technique will be successful.

        The trick might be to build sufficiently robust defenses that any attempt to breach them will be identifiable (nice in theory, but not practical in today’s computing environments). Even if defenses were good enough to make breach attempts observable, nothing has been said about identifying the source of the breach attempt and establishing a demotivational mechanism (i.e., prison time).

        So long as anonymity is a feature of the internet, the social environment enables perpetual cyber offense technique evolution and application…without fear of retribution.

        (So, in a sense, I’m backtracking a little and suggesting that a science might evolve if it concentrates on provably correct code. In the meantime, the issue remains a social, motivational, and economic challenge, where the only realistic course of action is to just “try harder.” [Yes, I’m a pessimist.])


  2. Robert Brown Says:

    As a CISO, I have abstracted security down to a single-word description: choice. Everything about security relates back to a choice or decision that was made. What software to use, how to configure it, how to patch it, what password a user elects to use, whether or not they use the same password in multiple places, if they click on an email attachment, if they forward a sensitive document to a personal Gmail account to work on it at home, and so on. I fit more into the security economics camp than the scientific camp.

    The big question for me is why people make bad choices, and how we influence those choices to be different. I think (for the most part) people want to make good decisions, but human nature gets in the way. We used to have many more traffic injuries due to not wearing seatbelts than we do now, because cars nag you constantly to put the belt on. The trick is to create the right nags, at the right time, to model those choices to be better.

    OK, so as to why we make bad choices, I see three reasons: ignorance, inertia, and apathy. Ignorance as to the consequences of the choice — DES is the same as AES, right? Inertia as to not wanting to change at all — there might be a production risk to applying that security patch, no? Apathy being the worst of all — I know this is a high risk but I simply don’t care.

    I could go on and on about this, but modeling choice is effective and I’m spending a lot of time on it, from the simple password strength meters to things like color-coding different network folders so users don’t drag-drop high risk files into the wrong place. There are a lot of simple solutions out there and sometimes we elect to simply look beyond them.
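    The strength-meter nag described above could be sketched roughly as follows (a minimal illustrative sketch; the scoring thresholds and labels are my own assumptions, not something from the comment or any particular product):

```python
import string

def password_score(pw: str) -> int:
    """Score a password 0-4 on length and character variety.

    Thresholds are illustrative assumptions, not a standard.
    """
    score = 0
    if len(pw) >= 12:
        score += 1
    if any(c in string.ascii_lowercase for c in pw) and \
       any(c in string.ascii_uppercase for c in pw):
        score += 1
    if any(c in string.digits for c in pw):
        score += 1
    if any(c in string.punctuation for c in pw):
        score += 1
    return score

def nag(pw: str) -> str:
    """Turn the score into the feedback shown to the user --
    the 'nag' that nudges the choice, rather than silently
    accepting a weak password."""
    labels = {0: "very weak", 1: "weak", 2: "fair", 3: "good", 4: "strong"}
    return labels[password_score(pw)]
```

    The point of the sketch is the shape of the mechanism, not the scoring rule: the feedback arrives at the moment of choice, which is what makes it a nag rather than a policy document.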

    I hope some of that made sense in a short reply. In any event I hope the talk goes well and I definitely hope you are feeling better soon!


  3. Jim Harper Says:

    Economics.

    If you recognize that cybersecurity (like all security) is an exercise in risk management and tradeoffs, I think you end up with a better framework for thinking.

    I’m interested by what you treat as a problem statement: “The result has been repeated system failures, and a steady progression of new attacks and compromises.” Should the world NOT experience system failures and new attacks and compromises? Of course it should. The absence of a background, tolerable level of insecurity (which there is in physical security and every other thing), would indicate that too many societal resources are going into security and not enough into all the other things that society should prioritize.


  4. Andrew Martin Says:

    I like your abstract. A science of (cyber)security seems worth pursuing — though surely it will look more like behavioural science than computer ‘science’. The whole risk-based approach is being exposed as a fig leaf that appears to systematize an approach without necessarily understanding the underlying artefacts. The failure of the banking sector truly to understand its risks — despite whole departments devoted to the subject — is eloquent commentary on the difficulty of that approach where complex social systems are involved. Economics helps to explain a lot of behaviours, but economics itself is hardly uncontentious, and its predictive power is better in some places than others.

    But as you imply, security is but a part of a bigger picture. Cyberspace is now tightly wrapped around – entwined with – the rest of society. The information revolution is taking hold. The re-thinking of how we learn, shop, interact socially, get entertained, do business, work as individuals, relate to our families, and more, continues apace. From one perspective, this is just a collection of technologies, some more useful than others. But taken together, the impact is profound. I don’t think anyone has a handle on the locus of that development, beyond observing a few phenomena around its periphery. Security is but one corner of this. I struggle to decide whether it is in fact a unifying theme, or just a cross-cutting form of disruption.


  5. Adam Shostack Says:

    You wrote “A solution, then, would seem to be to identify underlying scientific principles of cyber security…”

    I think you may be putting the cart before the horse here. Before we can identify principles, we need to observe what’s going wrong, and what isn’t. But we as security people choose to suppress that information. As such, we lack the data with which to formulate or test theories.

    A *much* longer version of this argument appears in “The New School of Information Security,” Andrew Stewart & myself, Addison Wesley, 2008.


  6. Bob West Says:

    Spaf,

    Not sure if this is helpful, but here is one of my observations at a macro level. The cybersecurity world takes a reactive approach instead of a proactive one. We have lots of intrusion detection, anti-virus, and anti-malware technologies, but fail to take preventative measures. For example, if you look at the major CNCI spends, they are mostly reactive. One of the basic reasons malware gets onto systems is the lack of integrity in operating systems and web browsers. If the right level of integrity existed (think software quality), malware would not be an issue. Technologies such as Trusteer make up for this, but this type of technology should be part of the basic operating system and browser environment. The right level of security needs to be built into the core technologies we use and into business processes.

    Hope this helps,

    Bob


  7. spaf Says:

    I wanted to post a follow-up — thanks for all the ideas. My talk went well. You can find a link to the abstract and slides of my talk here: http://spaf.cerias.purdue.edu/news.html#worldcomp


