Secure 360 is a two-day security conference held every May in Saint Paul, MN, and I’ve been helping with speaker selection for four years in a row. This year is different, though, because I volunteered to co-chair the program committee. We had over 130 submissions for just over 50 speaking slots and a loose committee of about 20 volunteers. We’ve taken a variety of approaches over the years, but I couldn’t help but think that there must be a better way to do it.
I decided to tap all my connections and see how many other speaker coordinators I could talk to – I mean, someone, somewhere must have “the secret”. So I hit Twitter and sent emails. I made quite a few connections and got to talk to some good people. It was great to learn how most current conferences select their speakers, but it was disappointing to learn that none of them had anything better. It turns out there is no secret sauce, and a wet finger in the wind is about as good as we’ve got.
However, I did pick up a few nuggets here and there. I pieced some things together and came up with a process that I think worked pretty well this year. So rather than keep it a secret, I want to share how we selected speakers this year.
Step 1: Guiding the guesswork
Selecting speakers is mostly guesswork. Submissions come in from everywhere, and chances are good that speakers will be selected solely on the material they submit. Asking the right questions and drawing information out of potential speakers is important. I’ve also learned from previous years that limiting how much is drawn out is almost as important. A minority of speakers like to publish a paper in every field, so put a hard limit on the information gathered if at all possible.
We modified our fields slightly from previous years. We asked for a brief synopsis up front – this is what would get published in the conference material – but then we also asked for a detailed outline. I was hoping for more information, and I wanted to see if we could deduce quality from that field. Honestly, the detailed outline only helped in about 30% of the submissions. One of the best things we did was ask for up to five “learning points,” and I found myself referring to those often. More often than not, the learning points showed more of the speaker’s intention than the verbose detailed outline did. I highly suggest asking for both, though.
We also tried accepting links to online videos. I figured the more proficient speakers would have something online and we could watch them in action. In truth, fewer than 5% of submissions used that field, and of those, I don’t think many volunteers watched the videos, let alone vetted them.
Step 2: Pre-Voting
There’s another step in there – “get a bunch of submissions” – but I’m skipping that. We’re pretty lucky to have some programming skill behind our website, so we were able to do some good things like set up online voting. I had relatively few instructions for the voting:
- Accept a single vote (1-5) per voter, per session
- Accept a comment (140 characters) and tie it to the voter (for questions/follow-up)
- Minimize clicking by voters and display all the necessary information on one page
In some previous years, votes were accepted on both the speaker and the session; in other years, multiple votes were collected, like “relevance” and “speaker knowledge”. I highly recommend keeping the voting dead simple, and I cannot stress that enough. When it comes to step 4, the voting is purely one data point of many, and it was often overruled.
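To make the rules concrete, here is a minimal in-memory sketch of the voting constraints above – one 1–5 vote per voter per session, an optional 140-character comment tied to the voter. This is a hypothetical illustration, not the actual conference-site code (which lived behind the website’s own database).

```python
class VoteBook:
    """One vote (1-5) per voter per session, plus an optional short comment."""

    def __init__(self, comment_limit=140):
        self.comment_limit = comment_limit
        self.votes = {}     # (voter, session) -> score
        self.comments = {}  # (voter, session) -> comment text

    def cast(self, voter, session, score, comment=None):
        if not 1 <= score <= 5:
            raise ValueError("score must be 1-5")
        if comment and len(comment) > self.comment_limit:
            raise ValueError("comment exceeds %d characters" % self.comment_limit)
        # A second vote from the same voter simply replaces the first,
        # which keeps the "single vote per voter, per session" rule.
        self.votes[(voter, session)] = score
        if comment:
            self.comments[(voter, session)] = comment

    def scores(self, session):
        """All raw scores cast for one session."""
        return [s for (v, sess), s in self.votes.items() if sess == session]
```

Keeping the model this small is the point: one score, one optional comment, nothing else for volunteers to think about.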
Accepting comments was a stroke of brilliance that I picked up from my connections. We end up doing final speaker selection in a single day, and not everyone can (or will) attend that session. I wanted to give everyone a chance to be heard, and those comments enabled input from people who were not able to get out of the office on a Friday.
In previous years we had to click around to look at a speaker bio and then click back to the session information. Getting through and voting on submissions is a chore, and every extra click is compounded by the quantity of submissions – it had to be easy, or people would burn out quicker and fewer votes (and comments) would come in.
Step 3: Compile the results
This was hard. We usually get together physically to pick speakers, and we need the speaker information to do that. I ended up getting back to my roots and writing perl code. I got a full MySQL dump of the database and broke about every good rule of development to pull out and present the information I thought folks wanted. I knew this was mostly a one-shot deal (except for perhaps next year), so I wrote it quick-n-dirty. I think it was about 15–20 hours, but in reality, it had to be much more. My code spat out HTML, which I then opened in MS Word for some final formatting.
I set up two sections in the material, “At a Glance” and “Detailed Sessions”. I wanted a way to compare sessions quickly and yet offer a reference for the details. I assumed most people would stare at the summary information, so I tried to fit as much in there as I could. I’ll change some names, give an example, and walk through it.
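The compile step itself is simple in spirit: roll the raw votes up per session and emit HTML for the “At a Glance” table. My version was quick-n-dirty perl against the MySQL dump; what follows is a hypothetical Python sketch of the same idea, with made-up field names (`title`, `speaker`, `votes`) standing in for whatever your database actually holds.

```python
from collections import Counter
import html

def at_a_glance(sessions):
    """Emit a bare-bones HTML summary table.

    `sessions` is a list of dicts with assumed keys:
    title, speaker, and votes (a list of raw 1-5 scores).
    """
    rows = []
    for s in sessions:
        tally = Counter(s["votes"])
        # Render counts in the "3-3's 5-4's" style used in the write-up.
        counts = " ".join("%d-%d's" % (tally[v], v) for v in sorted(tally))
        rows.append("<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % (
            html.escape(s["title"]), html.escape(s["speaker"]), counts))
    return "<table>\n%s\n</table>" % "\n".join(rows)
```

From there, opening the generated HTML in a word processor for final formatting – as I did – is crude but effective for a one-shot report.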
I wanted to show both the vote totals (in this case, “John” got three 3s, five 4s, and three 5s, where 3 was “okay”) and the overall score (I weighted the votes as 1, 2, 3, 5, 8 and displayed the mean). Under the title and speaker name, I put the comments. We had several very chatty people who couldn’t make the meeting, and like I said, it was great to get their input even though they could not attend. One very useful thing I did was compile the feedback from previous years and include it here; in this case, “John” was rated in the top 25% (compared to the other speakers that year). Using that historical data proved to be very, very helpful. In this example, there are several good reasons to select John (that “1” on the left is his initial ranking in this category).
Step 4: Select Speakers
Not a whole lot of special sauce here: sit down and start picking people. Some conference organizers were lucky enough to do this at a bar. We met at 8am and went until almost 4pm, so drinking through it didn’t seem like a wise option. About 12 people showed up, and we broke off into small groups to tackle the different categories, then looped back together and looked at conflicts and overlaps. All in all, it was a good experience. I’m leaving out a lot of the details, so if you’re selecting speakers, please reach out and I’ll talk your ear off.
There are about as many definitions of risk as there are people to ask, and I’ve spent far too much energy pursuing this elusive definition, but I think I can say I’ve reached a good place. After all my reading, pontificating and discussing, I feel that I am ready to answer the deceptively simple question “how do you define risk?” with this very simple answer:
I don’t know.
Oh, I can toss things out there like “the probable frequency and probable magnitude of future loss” from the FAIR methodology. I could also wax philosophical about how I *mostly* agree with Douglas Hubbard’s well-developed definition of “a state of uncertainty where some of the possibilities involve a loss” (note: I *mostly* agree just to pretend that I know something Mr. Hubbard doesn’t).
But if I don’t know, how can I say that I’ve reached a good place pursuing a risk definition? Because I have accepted the ambiguity and I’ve realized that terminology and definitions exist simply to help communicate concepts or ideas. That’s where we should be spending our efforts, behind the definitions. In that light, I have come to believe that definitions don’t have to be 100% right, they simply have to be helpful. Take the definition of risk from ISO 31000: “the effect of uncertainty on objectives”. That sounds cool, even after thinking about it for a while, but when it comes to being helpful? Nope, not even close. I may have an objective of defining risk and I’m immersed in uncertainty but I wouldn’t call the effect of that uncertainty “risk”. If anything, that definition leaves me more confused than when I started.
There’s some good news, though: the problem of defining central terms isn’t unique to risk. Take this from Melanie Mitchell:
In 2004 I organized a panel discussion on complexity at the Santa Fe Institute’s annual Complex Systems Summer School. It was a special year: 2004 marked the twentieth anniversary of the founding of the institute. The panel consisted of some of the most prominent members of the SFI faculty…all well-known scientists in fields such as physics, computer science, biology, economics and decision theory. The students at the school…were given the opportunity to ask any question of the panel. The first question was, “How do you define complexity?” Everyone on the panel laughed, because the question was at once so straightforward, so expected, and yet so difficult to answer.
She goes on in her book to say “Isaac Newton did not have a good definition of force” and “geneticists still do not agree on precisely what the term gene refers to at the molecular level.”
I take comfort in these stories: we are not unique, we are not alone.
As we move forward in the pursuit of information risk, let’s stay focused on where the real work should be done: measuring and communicating risk. Let’s put a little less effort into defining it for now. Don’t get me wrong, definitions are helpful, but let’s not get all wrapped up in the precision of words when we’re still struggling with the concepts they describe.