Secure 360 is a two-day security conference held every May in Saint Paul, MN, and I’ve been helping with speaker selection for four years running. This year is different, though, because I volunteered to co-chair the program committee. We had over 130 submissions for just over 50 speaking slots and a loose committee of about 20 volunteers. We’ve tried a variety of approaches over the years, but I couldn’t help thinking that there must be a better way to do it.
I decided to tap all my connections and see how many other speaker coordinators I could talk to – I mean, someone, somewhere must have “the secret”. So I hit Twitter and sent emails. I made quite a few connections and got to talk to some good people. It was great to learn how most of the current conferences select their speakers, but it was disappointing to learn that none of them had anything better. Turns out there is no secret sauce, and a wet-finger-in-the-wind is about as good as we’ve got.
However, I did pick up a few nuggets here and there. I pieced some things together and came up with a process that I think worked pretty well this year. So rather than keep it a secret, I wanted to share how we selected speakers this year.
Step 1: Guiding the guesswork
Selecting speakers is mostly guesswork. Submissions come in from everywhere, and chances are good that speakers will be selected solely on the material they submit. Asking the right questions and drawing information out of potential speakers is important. I’ve also learned from previous years that limiting how much is drawn out is almost as important. A minority of speakers like to write a paper in every form field, so put a hard limit on the information gathered if that’s possible.
We modified our fields slightly from previous years. We asked for a brief synopsis up front – this is what would get published in the conference material – but then we asked for a detailed outline. I was hoping for more information, and I wanted to see if we could deduce quality from that field. Honestly, the added detail only helped in about 30% of the submissions. One of the best things we did was ask for up to five “learning points”, and I found myself referring to those often. More often than not, the learning points showed more of the speaker’s intent than the verbose detailed outline. I highly suggest collecting both, though.
We also tried accepting links to online videos. I figured the more proficient speakers would have something online and we could watch them in action. In truth, fewer than 5% of submissions used that field, and of those, I don’t think many volunteers even checked the links, let alone watched the videos.
Step 2: Pre-Voting
There’s another step in there – “get a bunch of submissions” – but I’m skipping that. We’re pretty lucky to have some programming skill behind our website, so we were able to do some good things like set up online voting. I had relatively few instructions for the voting:
- Accept a single vote (1-5) per voter, per session
- Accept a comment (140 characters) and tie it to the voter (for questions/follow-up)
- Minimize clicking by voters and display all the necessary information on one page
In some previous years, votes were accepted on both speaker and session; in other years, multiple votes were collected per session, like “relevance” and “speaker knowledge”. I highly recommend keeping the voting dead simple, and I cannot stress that enough. When it comes to step 4, the voting is purely one data point of many, and it was often overruled.
Accepting comments was a stroke of brilliance that I picked up from my connections. We do final speaker selection in a single day, and not everyone can (or will) attend that session. I wanted to give everyone a chance to be heard, and the comments enabled input from people who were not able to get out of the office on a Friday.
In previous years we had to click around to see a speaker’s bio, then click back to the session information. Getting through and voting on submissions is a chore, and every extra click is compounded by the quantity of submissions – it had to be easy, or people would burn out quicker and fewer votes (and comments) would come in.
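To make the rules concrete, here’s a minimal sketch of the vote record they imply. This is a hypothetical Python model – our site wasn’t written in Python, and the field names are made up for illustration:

```python
from dataclasses import dataclass

# Hypothetical sketch of the vote record described above;
# names are illustrative, not our actual schema.
@dataclass
class Vote:
    voter_id: int      # ties the vote (and comment) to a person for follow-up
    session_id: int
    score: int         # a single 1-5 vote per voter, per session
    comment: str = ""  # optional, capped at 140 characters

    def __post_init__(self):
        if not 1 <= self.score <= 5:
            raise ValueError("score must be 1-5")
        if len(self.comment) > 140:
            raise ValueError("comment must fit in 140 characters")
```

The third rule (everything on one page, minimal clicking) is a UI concern rather than a data one, which is part of why the model can stay this small.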
Step 3: Compile the results
This was hard. We usually get together physically to pick speakers, and we need the speaker information to do that. I ended up getting back to my roots and writing Perl code. I got a full MySQL dump of the database, and I broke about every good rule for developers to pull out and present the information I thought folks wanted. I knew this was mostly a one-shot deal (except perhaps for next year), so I wrote it quick-n-dirty. I think it took about 15-20 hours, but in reality it had to be much more. My code spit out HTML, which I then opened in MS Word for some final formatting.
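My actual code was quick-n-dirty Perl against the MySQL dump and isn’t worth publishing, but the heart of the aggregation step looked roughly like this Python sketch (SQLite stands in for the dump here, and the `votes` table and its columns are illustrative, not our real schema):

```python
import sqlite3  # stand-in for the real MySQL dump; schema is hypothetical

def at_a_glance_rows(conn):
    # One summary entry per session: how many 1s, 2s, ... 5s it received.
    rows = conn.execute("""
        SELECT session_id, score, COUNT(*) FROM votes
        GROUP BY session_id, score
    """)
    summary = {}
    for session_id, score, n in rows:
        summary.setdefault(session_id, {s: 0 for s in range(1, 6)})[score] = n
    return summary

def to_html(summary):
    # Spit out bare-bones HTML, one table row per session, for final
    # formatting elsewhere (in my case, MS Word).
    lines = ["<table>"]
    for session_id, dist in sorted(summary.items()):
        cells = "".join(f"<td>{dist[s]}</td>" for s in range(1, 6))
        lines.append(f"<tr><td>{session_id}</td>{cells}</tr>")
    lines.append("</table>")
    return "\n".join(lines)
```

The real version pulled in bios, comments, and prior-year feedback too, but it was the same shape: group the raw votes, then dump rows of HTML.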
I set up two sections in the material, “At a Glance” and “Detailed Sessions”. I wanted a way to compare sessions quickly and yet offer a reference for details. I assumed most people would stare at the summary information, so I tried to fit as much in there as I could. I’ll change up some names, give an example, and walk through it.
I wanted to show both the vote totals (in this case “John” got three 3s, five 4s, and three 5s; 3 was “okay”) and the overall score (I weighted the votes as 1, 2, 3, 5, 8 and displayed the mean). Under the title and speaker name, I put the comments. We had several very chatty people who couldn’t make the meeting, and like I said, it was great to get their input even though they could not attend. One very useful thing I did was compile the feedback from previous years and include it here. In this case, “John” was rated in the top 25% compared to other speakers that year. That historical data proved to be very, very helpful. In this example there are several good reasons to select John (the “1” on the left is his initial ranking in this category).
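The scoring maps each 1-5 vote onto the weights 1, 2, 3, 5, 8 and takes the mean. Assuming a vote of n maps to the nth weight (my reading of it), “John’s” numbers work out like this Python sketch of the arithmetic:

```python
# Vote n maps to the nth weight; the weights 1, 2, 3, 5, 8 are from the
# post, the exact mapping is my assumption.
WEIGHTS = {1: 1, 2: 2, 3: 3, 4: 5, 5: 8}

def weighted_score(votes):
    """Mean of the weighted votes for one session."""
    return sum(WEIGHTS[v] for v in votes) / len(votes)

# John's example: three 3s, five 4s, and three 5s.
john = [3] * 3 + [4] * 5 + [5] * 3
print(round(weighted_score(john), 2))  # (9 + 25 + 24) / 11, prints 5.27
```

The uneven weights give the 4s and especially the 5s extra pull, so a session with a few enthusiastic votes can outrank one with uniformly lukewarm ones.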
Step 4: Select Speakers
Not a whole lot of special sauce here: sit down and start picking people. Some conference organizers are lucky enough to do this at a bar. We met at 8am and went until almost 4pm, so drinking during it didn’t seem like a wise option. About 12 people showed up, and we broke off into small groups to tackle the different categories, then looped back together and looked at conflicts and overlaps. All in all it was a good experience. I’m leaving out a lot of the details, so if you’re selecting speakers, please reach out and I’ll talk your ear off.