
    State of Cybercrime

    Join us for State of Cybercrime, where experts discuss the latest trends and developments in the world of cybercrime and provide insights into how organizations can protect themselves from potential threats. Sponsored by Varonis.
    186 Episodes


    Episodes (186)

    Objective-See - Advanced MacOS Security Tools by Ex-NSA Hacker Patrick Wardle

    Check out Objective-See: https://objective-see.com/

    Objective-See Twitter: https://twitter.com/objective_see

    Objective-See Patreon: https://www.patreon.com/objective_see

    While In Russia: Patrick's RSA talk on hacking journalists - 

    Patrick's Twitter: https://twitter.com/patrickwardle 

    This podcast is brought to you by Varonis, if you'd like to learn more check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/

    ESP8266 - The Low-cost Wi-Fi Microchip with a Full TCP/IP Stack

    Stefan's Site with links to all of his projects: https://spacehuhn.io/

    Twitter: https://twitter.com/spacehuhn

    YouTube: https://www.youtube.com/channel/UCFmjA6dnjv-phqrFACyI8tw

    An overview of the ESP8266 https://www.espressif.com/en/products/hardware/esp8266ex/overview

    Stefan's Github https://github.com/spacehuhn

    ESP8266 Deauther 2.0 https://github.com/spacehuhn/esp8266_deauther

    WiFi Duck - Wireless injection attack Platform

    https://github.com/spacehuhn/WiFiDuck

    WiFi Satellite - monitoring and logging 2.4GHz WiFi Traffic

    This podcast is brought to you by Varonis, if you'd like to learn more check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/

    State of Cybercrime
    November 22, 2019

    Grabify - the IP Logging, Honeypot Tracking URL Shortener

    A honeypot is a tool that acts as bait, luring an attacker into revealing themselves by presenting a seemingly juicy target. In our first Security Tools podcast, we explore a free tool called Grabify that can gather information about scammers or attackers when they click on a honeypot tracking link.

    https://grabify.link/

    https://jlynx.net/

    https://twitter.com/grabifydotlink
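    The core mechanic behind a tracking link like Grabify is simple enough to sketch: an endpoint that records the visitor's IP address and User-Agent, then redirects them to an innocuous page. The sketch below is illustrative only, not Grabify's actual implementation, and the destination URL is a placeholder:

```python
import http.server
import threading

# Placeholder decoy destination; a real tracking link forwards to whatever
# page the link creator chose.
DESTINATION = "https://example.com/"

CLICKS = []  # (ip, user_agent) pairs captured from visitors

class TrackingHandler(http.server.BaseHTTPRequestHandler):
    """Record who clicked the link, then bounce them to the decoy page."""

    def do_GET(self):
        ip = self.client_address[0]
        ua = self.headers.get("User-Agent", "unknown")
        CLICKS.append((ip, ua))               # the "grab"
        self.send_response(302)               # then the innocent-looking redirect
        self.send_header("Location", DESTINATION)
        self.end_headers()

    def log_message(self, *args):             # silence default request logging
        pass

def start_tracker(port=0):
    """Serve the tracking link on a background thread; returns (server, port)."""
    server = http.server.HTTPServer(("127.0.0.1", port), TrackingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

    A real service layers link shortening, geolocation lookups, and a dashboard on top of this basic capture-and-redirect loop.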

    This podcast is brought to you by Varonis, if you'd like to learn more check out the Cyber Attack Lab at https://www.varonis.com/cyber-workshop/

    Be the First to Know

    We wanted you to be the first to know that next week, we will be back in this same feed with a new security podcast from Varonis.

    The new Security Tools podcast will keep you up to date with the most exciting and useful tools the Infosec community has to offer.

    Join us on the new show to hear from the researchers and hackers behind tools like Grabify, a link-based Honeypot service that unmasks scammers leveraging the same web tracking tactics used by most modern websites. We’ll find out why it’s so hard to stay anonymous online and show you how to use the power of tracking links to find the real location of an online scammer.

    See you next week.

    State of Cybercrime
    November 05, 2019

    Changing User Behavior

    Summer is approaching, and of course that's when we feel the most heat. Cybersecurity managers, however, feel the heat all the time: they must be right every time, because cybercriminals only have to be right once. For cybersecurity pros, summer can feel like it lasts year-round, and that can cause job burnout.

    Another problem managers face is the potential ineffectualness of cybersecurity awareness training. Learning and sharing interesting security information in a class is wonderful and mind-expanding for a user. But if the training doesn't change a user's behavior, and he keeps clicking on links he shouldn't be clicking on, it might not be as helpful as it claims to be.

    Other articles discussed:

    Tool of the week: htrace.sh - simple Swiss Army knife for http/https troubleshooting and profiling.
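    htrace.sh is a shell wrapper around tools like curl and openssl; the heart of what it reports, the hop-by-hop redirect chain, can be sketched in a few lines. The sketch below is an illustrative stand-in written with Python's standard library, not htrace.sh itself (which also reports timing, headers, and TLS details per hop):

```python
import http.client
from urllib.parse import urlsplit, urljoin

def trace_redirects(url, max_hops=10):
    """Follow a plain-HTTP redirect chain hop by hop.

    Returns a list of (status_code, url) pairs, one per hop.
    (HTTPS hops would use http.client.HTTPSConnection; query strings
    are omitted for brevity.)
    """
    hops = []
    for _ in range(max_hops):
        parts = urlsplit(url)
        conn = http.client.HTTPConnection(parts.hostname, parts.port or 80)
        conn.request("GET", parts.path or "/")
        resp = conn.getresponse()
        hops.append((resp.status, url))
        location = resp.getheader("Location")
        resp.read()
        conn.close()
        if location is None:           # final hop: no further redirect
            break
        url = urljoin(url, location)   # resolve relative Location headers
    return hops
```

    Running this against a URL that bounces through a shortener shows each intermediate status code and address, which is the quickest way to see where a link actually lands.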

    Panelists: Cindy Ng, Mike Buckbee, Kris Keyser, Kilian Englert

    Security and Technology Unleashed

    Searching a traveler’s phone or laptop is not an extension of a search made on a piece of luggage. As former commissioner of Ontario Ann Cavoukian said, “Your smartphone and other digital devices contain the most intimate details of your life: financial and health records.”

    In general, it's also dangerous to map laws written for the physical world onto the digital space. And even with GDPR, which is aimed at protecting consumer data, regulators have yet to take action against any major technology firm such as Google or Facebook.

    It seems our relationship with technology might get worse before it gets better.

    Other articles discussed:

    Tool of the week: Ghidra is a software reverse engineering (SRE) framework

    Professor Angela Sasse FREng on Human-Centered Security

    Lately, we’ve been hearing more from security experts who are urging IT pros to stop scapegoating users as the primary reason for not achieving security nirvana. After covering this controversy on a recent episode of the Inside Out Security Show, I thought it was worth having an in-depth conversation with an expert.

    So, I contacted Angela Sasse, Professor of Human-Centred Technology in the Department of Computer Science at University College London, UK. Over the past 15 years, she has been researching the human-centered aspects of security, privacy, identity, and trust. In 2015, for her innovative work, she was awarded the Fellowship of the Royal Academy of Engineering (FREng) as one of the best and brightest engineers and technologists in the UK.

    In part one of my interview with Professor Angela Sasse, we cover the challenges that CISOs have in managing risk while finding a way to understand what’s being asked of the user. And more importantly, why improving the usability of security can positively impact an organization’s profits.

    Transcript

    Cindy Ng: Since 1999, Professor Angela Sasse has researched and promoted the concept of having security that works with and for users and their organization. She accomplishes this by appealing to the bottom line. Her hallmark paper, "Users Are Not the Enemy," argues that security frameworks designed on the assumption that users are dangerous create barriers that users must overcome in order to do their jobs, which makes security a resource-intensive administrative burden for their organization.

    For her exceptional work in 2015, Professor Angela Sasse was awarded the Fellowship of the Royal Academy of Engineering as being one of the best and brightest engineers and technologists in the UK.

    I think what you're doing is multilayered, multifaceted, and you're targeting two very different fields where you're trying to think about how to design innovative technologies that are functional while driving the bottom line. So that's B2B and then also improve the well-being of individuals and society and that's B2C and the strategies of those two things are very different. So maybe to just peel the layers back to start from the beginning, your research focuses on human usability of security and perhaps privacy too. Maybe it might be helpful to define what usability encompasses.

    Angela Sasse: Okay. So, usability, there's a traditional definition, there's an, you know, International Standards Organization definition of it, and it says, "Usability is if a specified user group can use the mechanism to achieve their goals in a specified context of use." And that actually makes it really quite, quite complex, because what it's really saying is there isn't a sort of, like, hard-line measure of what's usable and what isn't. It's about the fit, how well it fits the person that's using it and the purpose they're using it for in the situation that they're using it.

    Cindy Ng: Usability is more about the user, the human and not necessarily the technology, it's, after all, just a tool. And we have to figure out a way to fit usability into the technology we're using.

    Angela Sasse: Yes, of course, and what it amounts to is that, of course, it's not economic. It wouldn't be economically possible to get a perfect fit for 120 different types of interactions in situations that you do. What we generally do is we use four or five different forms of interaction, you know, that work well enough across the whole range of interactions that we do. So there's locally optimal and globally optimal: you could make a super good fit for each different situation, but if you don't want to learn about 120 different ways of doing something, the globally optimal thing is to have a limited set of interactions and symbols and things that you're dealing with when you're working with technology.

    So, security, however, one of the things that a lot of people overlook when it comes to security and usability is that from the user's point of view, security is always what usability people call a secondary task or enabling task. So this is a task I have to do to get to the thing I really want to do, and so the kind of tolerance or acceptance that people have for delays or difficulty is even less than with their sort of primary interactions.

    Cindy Ng: It's like a chore. For instance, an example would be I need to download an app, perhaps, in order to register for something.

    Angela Sasse: Yeah, and so what you want to do is, you know, you want to use the app for a particular purpose, and then if you basically have...if the user perceives that in order to be able to use the app, you know, all the stuff you have to do to get to that point is too much of a hurdle, then most of them would just turn around and say, "It's not worth it. I'm not going ahead."

    Cindy Ng: When it comes to the security aspect, how does a CISO or an IT security admin decide that users are dangerous, and that if they only had the same knowledge that I have, they would behave differently? Where does downloading the app or using a website intersect with the job of a CISO?

    Angela Sasse: CISO is trying to manage the risks, and some of the risks might affect the individual employee or individual customer as well. But other risks are really risks to the organization, and if something went wrong it wouldn't directly affect the employee or the customer. But I think what, a CISO or SysAdmin, I would say to them is, "You've got to understand what you are asking the user to do. You have to accept that you're a security specialist, and you are focused on delivering security, but you're the only person in the organization for whom security is a primary task.

    For everybody else, it's a secondary task. It's a hurdle they have to jump over in order to do what they've been trained for, what they are good at, what they're paid to do. And so it's in your best interest to make that hurdle as small as possible. You should effectively manage the risk, but you've got to find ways of doing it that don't really bother anyone, where you're taking as little time and effort as possible away from the people who have to do it. Because otherwise you end up eating all the profits. Right?"

    Angela Sasse: The more effort you're basically taking away from the main activity that people do, the more you're reducing the profits of the organization.

    Cindy Ng: You've done the research, you're presenting it, and you're interacting with CISOs and SysAdmins. How has the mindset evolved, and what has some of the pushback been? Can you provide some examples?

    Angela Sasse: Early on a lot of the push back was really, well, people should do what they are told, and the other main push back is, "So, you're telling me, this is difficult or effortful to do for people. Can we give them some training?" The real push back is that they don't want to think about changing, making changes to the technology and to the way they are managing the risks. So their first thought is always, "How can I make people do what I want them to do." And so the very first big study that Adams and I did, we then subsequently...it's published in the paper, "Users Are Not the Enemy."

    So, this was a very big telecommunication company, and we said to them, "Look, your staff have between 16 and 64 different passwords, six-digit PINs and eight-character passwords, complex, and you're telling them they have to have a different one and they can't write it down." They were also expiring them every 30 days, so they had to change them every 30 days.

    And basically I said, "Nobody can do this." Then they said, "Okay, could they do it if we gave them some extra training?" And my response was, "Yes, and that would look like this, all your employees have to go on a one-year course to become memory athletes. Even when they come back, they're going to spend half an hour a day doing the memory techniques that you need to do in order to be able to recall all this stuff."

    And if you think about it that way, it's just absurd, rather than making changes to the password policy or providing an easier-to-use authentication mechanism. Sometimes what's equally ridiculous is, like, "Can you give me a psychology test so I can screen out the people who are not compliant, so that I can recruit people that are naturally compliant?"

    That's bizarre. You need to recruit people who are good at the jobs that your business relies on, good at the stuff your business delivers. If you just recruit compliant and risk-averse people, you're gonna go bust. So sometimes you have to really show the absurdity of the natural thinking that there is. There is this initial resistance to go, like, "I don't really want to change the way I think about security, and I don't want to change the mechanisms I use."

    Cindy Ng: I think a lot of the CISOs and the SysAdmins are restricted too by the tools and the software, and they feel like they're confined and have to work within a framework, because their job is really technical. It's always about are you able to secure my network first over the human aspect of it. And I really like what you said about how phishing scam attackers understand more of the human element of security than security designers have. Can you elaborate more on that?

    Angela Sasse: I think... So, I'm working with some of the government here in the UK, with those government agencies that are responsible for security and for advising companies about security. And I think it's very interesting to see that they have concluded that CISOs need, and security practitioners, that they need to develop their soft skills and that they need to engage. They need to listen more, and they need to also learn how to...once they have listened, you know, and understand how they can provide a fit, then how they can persuade people of the need for change.

    You know, because part of the whole problem is that even if you reconfigure the mechanisms and they're now easier to use, people still need to change their behavior. They still need to move on from existing habits to the new ones, and that can be a bit of a blocker for change, and you need to persuade people to embark on this journey of changing their existing habits. And for that you need soft skills, and you need to persuade them that I have now made it as easy as possible to use. Now your part, your responsibility, is to change your existing habit towards this new secure one, you know, which is feasible to do. And it's not particularly onerous, but you need to work through that process of changing, learning a new habit.

    Cindy Ng: How long do they want it to be? How long does it actually take, and how has their mindset evolved?

    Angela Sasse: Most of them now realize that their role really is to be a cheerleader for security, not, you know, the kind of old-school gatekeeper who can stop everybody. So most of them now do realize.

    Cindy Ng: When did that happen?

    Angela Sasse: I think it's happened...it's only very recent. For the majority of them it happened in the last, maybe, four or five years. Some still haven't gotten there, but quite a few of them have, and, you know, if I go to Infosec, for instance, I meet people there who've really done a very good job.

    And I think, actually, say if you, for instance, look at the born digital companies. I think they generally do...they do very well. You know, if you look at Google, Amazon, Facebook, eBay, they've generally worked very hard to secure their business without...and they know that it would be a threat to their business if people couldn't use the security or found the security to be cumbersome. And I think they've actually done a good job, pretty good job, to look at how you can make it easier to use. So I think those companies are currently leading the charge.

    But I've seen this happen in a couple of other places. Basically, other companies that have very big customer bases learn from the experience they get with those customers that they have to make it easier for customers to access services or use devices. Those lessons then also tend to filter through to how they are designing security for their own employees.

    So, you know, if you look at mobile phone companies and the television companies, you know, cable and satellite TV companies, I think they've really internalized...so the people working there really have quite a modern outlook. I think next coming around the corner is the big software and technology development companies. They have started to...so companies like Microsoft have started to realize this as well.

    Statistician Kaiser Fung: Fishy Stats (Part 3)

    Over the past few weeks, Kaiser Fung has given us some valuable pointers on understanding the big data stats we are assaulted with on a daily basis.  To sum up, learn the context behind the stats — sources and biases — and know that the algorithms that crunch numbers may not have the answer to your problems.

    In this third segment of our podcast, Kaiser points out all the ways stats can trick us through their inherently random nature — variability in stats-speak.

    Transcript

    Cindy Ng: In parts one and two of our interview with Kaiser Fung, we discussed the process behind a numerical finding, then focused on accuracy. In our last installment, Kaiser reveals one last way to cultivate numbersense.

    Your third point is to have a nose for doctored statistics. And for me, it's kind of like…what if you don't know what you don't know? I was surprised to read in the school rankings chapter in Numbersense that different publications have different rules for ranking. And I didn't know that reporting low GPAs as not available is a magic trick that causes the median GPA to rise. So if I didn't know this, I would just take any number in any of these publications and use it in my marketing. How do I cultivate a nose for doctored statistics?

    Kaiser Fung: Well, I think...well, for a lot of people, I think it would involve reading certain authors, certain people who specialize in this sort of stuff. I'm one of them, but there are also others out there who have this sort of skepticism, and they will point out how...you know, I think it's all about figuring out how other people do it, and then you can do it too, just following the same types of logic. Often times, there are multiple stages to this. So there's the stage of: can you smell something fishy? It's sort of this awareness that, "Okay, do I want to believe this or not?"

    And then there's the next stage of, do you...once you smell something, do you know where to look, how to look, how do you investigate it? So usually when you smell something, that means that you have developed an alternative hypothesis or interpretation that is different from the thing you're reading. So in sort of this scientific method, what we want to do at that point is to try to go out and find corroborating evidence. So then the question becomes: do you have this notion of what kinds of things I could find that could help you decide whether you're right or whether the original person is right? And here the distinction is really around experience: if you're more experienced, you might be able to know whether the information you can find will be sufficient to validate this or to refute that. So you don't necessarily go through the entire analysis. Maybe you just find a shortcut to get to a certain point.

    And then the last stage is the hardest to achieve and also not always necessary, but it's sort of like, okay, if you no longer believe in what was published, how do you develop your alternative argument? So that requires a little more work, and that's the kind of thing that I try to train my students to do. So often times when I set very open-ended problems for them, you can see these people in different stages. There are people who don't recognize where the problems are, you know, who just believe what they see. There are people who recognize the problems and are able to diagnose what's wrong. Then there are ones that can diagnose what's wrong, and, whether it's through looking at some other data or some other data points, they can decide: okay, instead of making the assumptions that the original people made, which I no longer believe, I'm going to make a different set of assumptions. So if I make this other set of assumptions, what would be the logical outcome of the analysis? So I think it's something that can be trained. It's just difficult in the classroom setting in our traditional sort of textbook lecture style. That type of stuff is very difficult to train.
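    The school-rankings trick Cindy mentions, reporting low GPAs as "not available," is easy to make concrete. The numbers below are invented for illustration; the point is that silently dropping the bottom of the distribution mechanically raises the median:

```python
import statistics

# Hypothetical class of eight students (invented numbers, not from the book).
gpas = [2.1, 2.4, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8]

honest_median = statistics.median(gpas)

# The "magic trick": report GPAs under 3.0 as "not available" and drop them.
reported = [g for g in gpas if g >= 3.0]
doctored_median = statistics.median(reported)

print(f"median with all students reported: {honest_median:.2f}")
print(f"median after dropping low GPAs:    {doctored_median:.2f}")
```

    Same students, two medians. Which one lands in the brochure is exactly the kind of question a nose for doctored statistics should raise.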

    Andy Green: Something you said about sort of being able to train ourselves. One thing that comes up in your books a lot is that many of us don't have a sense of the variability in the data. We don't understand what that means, or what it would look like if we were to put it out on a bar chart; we don't have that picture in our mind. And one example that you talk about, something we as marketers do a lot, is A/B testing. We'll do a comparison by changing one website slightly, testing it, and noticing that maybe it does better, we think. And then when we roll it out, we find out it really doesn't make much of a difference. So you talked about reasons why something might not scale up in an A/B test. I think you wrote about that for one of the blogs, I think it was Harvard Business Review.

    Kaiser Fung: ...I'm not sure about whether we're saying the same things. I'm not quite exactly remembering what I wrote about there. But from an A/B testing perspective, I think there are lots of little things that people need to pay attention to, because ultimately what you're trying to do is to come up with a result that is generalizable, right? So you can run your test in one period of time, but in reality, you would like whatever effect you find to hold over the next period of time.

    Now, I think both in this case as well as what I just talked about before, one of the core concepts in statistics is understanding variability. Whatever number is put in front of you is just an at-the-moment measurement, right? It's sort of like if you measure your weight on the same scale, it's going to fluctuate: morning, night, you know, different days. But you don't have this notion that your weight has changed. The actual measurement of the weight, even though it's still the same weight, will be slightly different.

    So that's the variability, but the next phase is understanding that there are sources of variability. There are many different reasons why things are variable. And I think that's sort of what we're getting into. So in the case of A/B testing, there are many different reasons why your results may not generalize. One very obvious example is what we call a drift in population. Especially with websites, a site changes over time. So even if you keep it stable during the test, when you roll it forward it may have changed. And just a small change in some part of the website could actually cause a very large change in the type of people that come to the page.

    So I have done...in the past, I've done a lot of A/B testing around what you'd call the conversion funnel in marketing. And this is particularly an issue if, let's say, you're testing on a page that is close to the end of the funnel. People do that because that's the most impactful place, because the conversion rates are much higher on those pages. But the problem is that because it's at the end of many steps, anything that changed in any of the prior steps is going to potentially change the types of people who end up on your conversion page.

    So that's one reason: there's variability in the type of people coming to your page, so even if the result worked during a test, it's not going to work later. But there are plenty of other things, including something that people oftentimes fail to recognize, which is that the whole basis of A/B testing is that you are randomly placing people into pockets. And the randomization is supposed to, on average, tell you that they are comparable and the same. Randomization will get you there almost all of the time, but you can throw a coin 10 times and get 10 heads. There's a possibility that there is something odd about that particular case.

    So another problem is: what if your particular test had this weird phenomenon? Now, in statistics, we account for that by putting error bars around these things. But it still doesn't solve the problem that that particular sample was a very odd sample. And so one of the underlying assumptions of all the analysis in statistics is that you're not analyzing that rare sample. That rare sample is treated as outside of the normal situation. So yeah, there are a lot of subtleties in how you would actually interpret these things. And so A/B testing is still one of the best ways of measuring something. But even there, there are lots of things that you can't tell.
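    Kaiser's coin-flip point is easy to verify numerically: even with perfectly random assignment, an "A/A test" (both arms identical, so the true lift is zero) will regularly show a sizable apparent lift. The simulation below is our own illustration, not something from the episode; the sample size and conversion rate are arbitrary choices:

```python
import random

random.seed(42)  # reproducible run

def aa_test_lift(n_per_arm, true_rate):
    """Run one A/A test: both arms share the same conversion rate,
    so any measured 'lift' is pure sampling noise."""
    a = sum(random.random() < true_rate for _ in range(n_per_arm))
    b = sum(random.random() < true_rate for _ in range(n_per_arm))
    return (b - a) / n_per_arm

# 2,000 no-effect tests: 1,000 users per arm, 5% base conversion rate.
lifts = [aa_test_lift(1000, 0.05) for _ in range(2000)]
noisy = sum(abs(l) >= 0.01 for l in lifts) / len(lifts)

print(f"P(10 heads in 10 fair flips) = {1 / 2**10:.4%}")
print(f"A/A tests showing a >= 1-point absolute 'lift': {noisy:.1%}")
```

    In this configuration, roughly a third of no-effect tests show an apparent lift of a full percentage point in one direction or the other, which is exactly the "odd sample" risk that error bars are meant to flag.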

    I mean, I also wrote about the fact that sometimes it doesn't tell you...we'd like to say A/B testing gives you cause-effect analysis. It all depends on what you mean by cause-effect because even the most...for a typical example, like the red button and the green button, it's not caused by the color. It's like the color change did not cause anything. So there are some more intricate mechanisms there that if you really want to talk about cause, you wouldn't say color is a cause. Although in a particular way of interpreting this, you can say that the color is the cause.

    Andy Green: Right, right.

    Cindy Ng: It really just sounds like at every point you have to ask yourself, is this accurate? Is this the truth? It's a lot more work to get to the truth of the matter.

    Kaiser Fung: Yes. So I think when people sell you the notion that somehow, because of the volume of the data, everything becomes easy, I think it's the opposite. That's one of the key points of the book. When you have more data, it actually requires a lot more work. And going back to the earlier point: when you have more data, the amount of potentially wrong analysis, of coming to the wrong conclusion, is exponentially larger. And a lot of it is because most analysis, especially with data that is not experimental, not randomized, not controlled, essentially relies on a lot of assumptions. And when you rely a lot on assumptions, it's the proverbial thing about how you can basically say whatever the hell you want with this data.

    And so that's why I think it's really important for people when especially for those people who are not actually in this business of generating analysis, if you're in the business of consuming analysis, you really have to look out for yourself because you really could, in this day and age, could say whatever you want with the data that you have.

    Cindy Ng: So be a skeptic, be paranoid.

    Kaiser Fung: Well the nice thing is like when they're only talking about the colors of your bicycles and so on, you can probably just ignore and not do the work because it's not really that important to the problem. But on the other hand, when you...you know, in the other case that is ongoing which is the whole Tesla autopilot algorithm thing, right? Like in those cases and also when people are now getting into healthcare and all these other things where your potential...there's a life and death decision, then you really should pay more attention.

    Cindy Ng: This is great. Do you have any kind of final thoughts in terms of Numbersense?

    Kaiser Fung: Well, I'm about...I mean, this is a preview of a blog post that I'm going to put out probably this week. And I don't know if this works for you guys, because it could be a bit more involved, but here's the situation. It again basically reinforces the point that you can easily get fooled by the data. So my TA and I were reviewing a data set that one of our students is using for her class project. And this was basically some data about the revenue contributions of various customers and some characteristics of the customers. So we were basically trying to solve the problem of: is there a way to use these characteristics to explain why the revenue contributions of different customers have gone up or down?

    So we spent a bit of time thinking about it, and we eventually came up with a nice way of doing it. You know, it's not an obvious problem, so we had a nice way of doing it, and we thought it actually produced pretty nice results. So then we met with the student, and pretty much the first thing that we learned from this conversation is that, oh, because this is proprietary data, all the revenue numbers were completely made up. There was some formula or whatever that she used to generate the numbers.

    So that's sort of the interesting dynamic there. Because on the one hand, there's obviously all the work that we put into creating this model, and the reason why we liked the model is that it created nicely interpretable results. It actually made sense, right? But it turns out that yes, it makes sense in that imaginary world, but it really doesn't have any impact on reality, right? And then the other side of this, which I kind of touch upon in my book too, is: if you were to just look at the methodology of what we did and the model that we built, you would say we did really good work. Because we applied a good methodology and generated quick results.

    So the method and the data and then your assumptions, all these things play a role in this ecosystem. And going back to what I was saying today, the problem is all this data. I think we have not spent sufficient time to really think about what the sources of the data are and how believable this data is. And in this day and age, especially with marketing data, with online data and all that, there's a lot of manipulation going on. There are lots of people who are creating this data for a purpose. Think about online reviews and all those other things. So on the analysis side, we have really not faced up to this issue. We just basically take the data, analyze it, come up with models, and say things. But how much of any of those things would be refuted if we actually knew how the data was created?

    Cindy Ng: That's a really good takeaway. You are working on many things, it sounds like. You're working on a blog, you teach. What else are you working on these days?

    Kaiser Fung: Well, I'm mainly working on various educational activities that are hoping to train the next generation of analysts and people who look at data, so that they will hopefully have...the Numbersense that I like to talk about. I have various book projects in mind which I hope to get to when I have more time. And from the Numbersense perspective, I'm interested in exploring ways to describe this in a more concrete way, right? So there's this notion of...I mean, this is a general ecosystem of things that I've talked about. But I want a system that ties it together a bit. And so I have an ongoing effort to try to make it more quantifiable.

    Cindy Ng: And so if people want to follow what you're doing, what are your Twitter handle and your website?

    Kaiser Fung: Yes, so my Twitter is @junkcharts. And that's probably where most of my, like in terms of updates that's where things go. I have a personal website called just kaiserfung.com where they can learn more about what I do. And then I try to update my speaking schedule there because I do travel around the country, speak at various events. And then they will also read about other things that I do like for corporations that are mostly around, again, training managers, training people in this area of statistical reasoning, data visualization, number sense and all that.

    We’d Love to Upgrade, But…


    It’s great to be Amazon, to have only one on-call security engineer and have security automated. However, for many organizations today, completely automated security is still an aspirational goal. Those in healthcare would love to upgrade, but what if you’re using a system that’s FDA approved, which makes upgrading a little more difficult? What if hackers were able to download personal data from a web server because many weren’t up to date and had outdated plugins? Meanwhile, here’s a lesson from veteran reporter Brian Krebs on how not to acknowledge a data breach.

    By the way, would you ever use public wifi and do you value certificates over experience?

    Statistician Kaiser Fung: Accuracy of Algorithms (Part 2)


    In part one of our interview with Kaiser, he taught us the importance of looking at the process behind a numerical finding.

    We continue the conversation by discussing the accuracy of statistics and algorithms. With examples such as shoe recommendations and movie ratings, you’ll learn where algorithms fall short.

    Transcript

    Cindy Ng: In part one, Kaiser taught us the importance of looking at the process behind a numerical finding. And today, we’ll continue in part two on how to cultivate numbersense.

    Kaiser, do you think algorithms are the answer? And when you’re looking at a numerical finding, how do you know what questions to ask?

    Kaiser Fung: So I think...I mean, there are obviously a big pile of questions that you could ask, but I think that the most important question not asked out there is the question of accuracy. And I've always been struck...I keep mentioning this to my blog readers...that if you open up any of the articles that are written, whether it's the New York Times or the Wall Street Journal, all these papers have big data articles, and they talk about algorithms, they talk about predictive models and so on. But you can never find a quantified statement about the accuracy of these algorithms.

    They would all qualitatively tell you that they are amazing and wonderful. And really it all starts with understanding accuracy. In the Numbersense book, I addressed this with the Target example of the pregnancy models. But also in my previous book, I talk about the whole thing around steroids and also lie detector testing, because it's all kind of the same type of framework. It's really all about understanding the multiple different ways of measuring accuracy. So it starts with understanding false positives and false negatives, but really other, more useful metrics are derived from those. And you'll be shocked how bad these algorithms are.
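    The false positive/false negative framework Fung mentions can be sketched in a few lines of Python. All the counts below are made up for illustration (they are not from the episode); the point is that several different "accuracy" metrics come out of the same confusion matrix.

```python
# Minimal sketch of accuracy metrics derived from a confusion matrix.
# All counts are invented for illustration.
def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn),   # share of negatives flagged
        "false_negative_rate": fn / (fn + tp),   # share of positives missed
        "precision": tp / (tp + fp),             # share of flags that are real
    }

# A rare condition: 100 true cases among 10,000 screened.
m = metrics(tp=80, fp=495, fn=20, tn=9405)
# Overall accuracy is about 95%, yet only about 14% of positive flags
# are correct: the kind of gap that headlines rarely quantify.
```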

    I mean it's not that...from a statistical perspective, they are pretty good. I try to explain this to people, too. It's not that we're all kind of snake oil artists, that these algorithms do not work at all. Usually, they do work if you compare with not using the algorithm at all. So you actually have incremental improvements, and sometimes pretty good improvements, over the case of not using an algorithm.

    Now, however, if the case of not using the algorithm leads to, let's say, 10% accuracy, and now we have 30% accuracy, you would be three times better. However, 30% accuracy still means that 70% of the time you got the wrong thing, right? So there's an absolute versus relative measurement here that's important. Once you get into that whole area, it's very fascinating. Because usually the algorithms also do not really make decisions; there are specific decision rules in place, because often times the algorithms only calculate a probability of something.

    So by analogy, the algorithm might tell you that there's a 40% chance of rain tomorrow. But somebody has to create a decision rule that says, you know, I'm going to carry an umbrella if it's over 60%...So there's all this other stuff involved. And then you also have to understand the soft side of it, which is the incentive of the various parties to go one way or the other. And the algorithm ultimately reflects its designer's choices, because the algorithm will not make the determination of whether you should bring an umbrella when it's over 60% or under 60%. All it can tell you is that for today it's 40%.
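    The umbrella analogy separates the model's probability from the human decision rule layered on top of it. A minimal sketch, using the 40% forecast and 60% cutoff from the conversation:

```python
# The model outputs a probability; a human-chosen threshold turns it
# into an action. The 60% cutoff is the designer's choice, not the model's.
def carry_umbrella(rain_prob: float, threshold: float = 0.6) -> bool:
    return rain_prob >= threshold

print(carry_umbrella(0.40))        # False: today's 40% is under the rule
print(carry_umbrella(0.40, 0.3))   # True: a more cautious rule flips the action
```

Same model output, different human rule, opposite action: the "decision" was never the algorithm's alone.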

    So I think this notion that the algorithm is running on its own is false anyway. And once you have human input into these algorithms, then you also have to wonder about what the humans are doing. And I think in a lot of these books, I try to point out that what also complicates it is that in every case, including the case of Target, there will be different people coming at this from angles where they are trying to optimize objectives that are conflicting.

    That's the beginning of it...that sort of asking questions of the output. And I think if we start doing that more, we can avoid some of this. A very recent situation that's relevant to our conversation here is the whole collapse of this…company. I'm not sure if you guys have been following that.

    Well, it's an example where people should have been asking questions about the algorithm. A lot of people have not been asking for quantifiable results, and the people who have been asking for quantifiable results have basically been pushed back on and, you know, they refused all the time to present anything. And at this point, I think it's been acknowledged that it's all...you know, empty, it's hot air.

    Andy Green: Right, yeah. You had some funny comments, I think it was on your blog, related to these algorithms, about buying shoes on the web, on one of the websites. And you were saying they were coming up with recommendations for other types of items that they thought you would be interested in. And what you really wanted was, when you went to buy the shoe, for the website to take you right to the shoe size that you ordered in the past, or the color that you ordered.

    Kaiser Fung: Right, right, yes.

    Andy Green: And that would be the simple, obvious thing to do, instead of trying to come up with an algorithm to figure out what you might like and making suggestions...

    Kaiser Fung: Yeah. So I think there are many ways to think about that. Part of it is that often times the most unsexy problems are the most impactful, but people tend to focus on the sexiest problems. So in that particular case, the whole article was about the idea that what makes a prediction inaccurate is not just the algorithm being bad...the algorithms often times are actually not bad. It is that the underlying phenomenon that you are predicting is highly variable.

    So I love to use examples like movies, since movie ratings were really big some time ago. How you rate a movie is not some kind of constant. It depends on your mood, it depends on what you did, it depends on who you are with. It depends on so many things. And the same person seeing the same movie under different settings would probably give different ratings. So in that sense, it is very difficult for an algorithm to really predict how you're going to rate the movie. But what I was pointing out is that there are a lot of other types of things that the algorithms could predict that have what I call an invariant property.

    And a great example of that is the fact that almost always...I mean, it's still not a hundred percent, but 90% of the time...you're buying stuff for yourself, and therefore you have certain shirt sizes, shoe sizes and so on. And therefore it would seem reasonable that they should just show you the things that are appropriate for you. It's not a very sexy type of prediction, but it is a kind of prediction. And there are many, many other situations like that. If you just think about even using email software, there are certain things that you click on there…it's because the way it's designed is not quite the way you use it. So we have all the data available, they're measuring all this behavior, it could very well be predicted.
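    The "unsexy" prediction Fung describes, simply reusing the customer's own stable history, can be sketched as below. The helper name and the data are hypothetical; the point is that for a near-invariant attribute, the mode of past behavior is a perfectly good predictor.

```python
# For near-invariant attributes like shoe size, the best "prediction"
# is often just the most frequent value in the customer's own history.
from collections import Counter

def predict_invariant(history):
    # Hypothetical helper: return the mode of past purchases.
    return Counter(history).most_common(1)[0][0]

print(predict_invariant(["9.5", "9.5", "10", "9.5"]))   # 9.5
```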

    So I feel like everybody does the same thing with the clicks every time, because they're very much like, "Well, I just say what I mean."

    State of Cybercrime
    April 17, 2019

    Security on Easy Mode


    Recently in the security space, there’s been a spate of contradicting priorities. For instance, a recent study showed that programmers will take the easy way out and not implement proper password security. Anecdotally, a security pro in a networking and security course noticed another attendee who covered his webcam but had his BitLocker recovery code printed on a label attached to his screen. When protocols and skills compete for our attention, ironically, security gets placed on easy mode. Meanwhile, in the real world, attackers can potentially create malware that would automatically add “realistic, malignant-seeming growths to CT or MRI scans before radiologists and doctors examine them.” How about the time when ethical hackers were able to access a university’s student and staff personal data, finance systems, and research networks? Perhaps more education and awareness are needed to take security out of easy mode and bring it into real-time alerting mode.

    Statistician Kaiser Fung: Investigate The Process Behind A Numerical Finding (Part 1)


    In the business world, if we’re looking for actionable insights, many think they’re found using an algorithm.

    However, statistician Kaiser Fung disagrees. With degrees in engineering and statistics, and an MBA from Harvard, Fung believes that both algorithms and humans are needed, as the sum is greater than its individual parts.

    Moreover, he suggests cultivating a worldview called numbersense. How? When presented with a numerical finding, go the extra mile and investigate the methodology, biases, and sources.

    For more tips, listen to part one of our interview with Kaiser as he uses recent headlines to dissect the problems with how data is analyzed and presented to the general public.

    Transcript

    Cindy Ng: Numbersense essentially teaches us how to make simple sense out of complex statistics. However, statistician Kaiser Fung says that cultivating numbersense isn’t something you can learn from a book, but there are three things you can do. First, you shouldn’t take published data at face value. Second, know what questions to ask. And third, have a nose for doctored statistics.

    And so, the first bullet is that you shouldn't take published data at face value. To me, that means it takes more time to get to the truth of the matter, to the issue at hand. And I'm also wondering to what extent the volume of data, big data, affects fidelity, because that certainly affects your final result?

    Kaiser Fung: There are lots of aspects to this. I would say, let's start with the idea that, well, it's kind of a hopeless situation, because you pretty much have to replicate everything or check everything that somebody has done in order to decide whether you want to believe the work or not. I would say, well, in a way that's true, but then over time you develop kind of a shortcut. Part of it is that if you have done your homework on one type of study, then you can apply all the lessons very easily to a different study, so you don't have to actually repeat all that.

    Also, organizations and research groups tend to favor certain types of methodologies. So once you've understood what they are actually doing and what the assumptions behind the methodologies are, then you've developed some idea about whether you're a believer in their assumptions or their method. And over time, there are certain people whose work I have come to appreciate. I've studied their work, and they share some of my own beliefs about how to read data and how to analyze data.

    So in that sense, it also depends on who is publishing the work. Part one of the question, then, is to encourage people not to just take what you're told but to really think about what you're being told, and there are some shortcuts to that over time. Going back to your other issue related to the volume of data, I think that is really causing a lot of issues. And it's not just the volume of data but the fact that data today is not collected with any design or plan in mind. And often times, the people collecting the data are really divorced from any kind of business problem, or divorced from the business side of the house. The data has just been collected and now people are trying to make sense of it. And I think you end up with many challenges.

    One big challenge is you don't end up solving any problems of interest. So I just had a write-up on my blog, which will be up sometime this weekend. And it's related to somebody's analysis of...I think it was Tour de France data. And there was this whole thing about, "Well, nowadays we have Garmin and we have all these devices, they're collecting a lot of data about these cyclists. And there's not much done in terms of analysis," they say.

    Which is probably true, because again, all of that data has been collected with no particular design in mind or problem in mind. So what do they do? Well, they basically say, "Well, I'm going to analyze the color of the bike that has won the Tour de France over the years." That's kind of the state of the world that we're in. We have the data, then we try to make it fit by forcing it to answer some questions that we were supposed to create.

    And often times these questions are actually very silly and don't really solve any real problems, like the color of the bike. I don't think anyone believes it impacts whether you win or not.

    I mean, that's just an example of the types of problems that we end up solving. And many of them are very trivial. And I think the reason we are there is that when you just collect the data like that...let's assume this data measures how fast the wheels are turning, the speed of your bike, all that type of stuff...the problem is that when you don't have an actual problem in mind, you don't actually have all of the pieces of data that you need to solve a problem. And most often what you don't have is an outcome metric.

    You have a lot of this sort of extensive data, but there's no measurement of the thing that you want to impact. And in order to get that, you have to actually merge in a lot of data or try to collect data from other sources. And often times you cannot find appropriate data, so you're kind of stuck in this loop of not having the ability to do anything. So I think the paradox of the big data age is that we have all this data, but it is almost impossible to make it useful in a lot of cases. There are many other reasons why the volume of data is not helping us. But what flashed in my head right now because of … is that one of the biggest issues is that the data is not solving any important problems.

    Andy Green: Kaiser, so getting back to what you said earlier about not just accepting what you're told...I've also now become a big fan of your blog, Junk Charts. And there was one post, I think it's pretty recent, where you commented on a New York Times article on CEO pay.
    And you actually looked a little deeper into it and came to sort of the opposite conclusion. Can you talk about that a little bit? Because the whole approach there has to do with Numbersense.

    Kaiser Fung: Yeah. So basically what happened was there was this big headline about CEO pay. And it was one of these sort of counter-intuitive headlines that basically said, "Hey, surprise..." Sort of a surprise, CEO pay has dropped. And it even gave a particular percentage, which I can't remember. And I think the Numbersense part of this is that when I read something like that...for certain topics, like this particular topic, since I have an MBA and I've been exposed to this type of analysis, I have some preconceived notion in my head about where CEO pay is going. And so it triggers a bit of a doubt in my head.

    So then what you want to do in these cases...and often times, I think this is an example of the very simple things you can do...is just click on the link in the article, go to the original report, and start reading what they say. In this particular case, you actually only need to read literally the first two bullet points of the executive summary of the report. Because then immediately you'll notice that CEO pay has actually gone up, not down. It all depends on what metric people used.

    And they're both actually accurate from a statistical perspective. So, the metric that went up was the median pay, the middle person. And the number that went down was the average pay. And here you basically need a little bit of statistical training, because you have to realize that CEO pay is an extremely skewed distribution. Even at the very top...I think they only talked about the top 200 CEOs...the top person is making something like twice what the second person makes. It's a very, very steep curve. So the average is really meaningless in this particular case, and the median is really the way to go.
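    The median-versus-average point is easy to reproduce with toy numbers. The pay figures below are invented (the real report's numbers aren't in the episode); they just show how the two metrics can move in opposite directions in a skewed distribution.

```python
# Invented pay figures ($M) for six CEOs: the top earner dwarfs the rest.
from statistics import mean, median

last_year = [100, 5, 4, 3, 2, 2]
this_year = [80, 6, 5, 4, 3, 3]    # top pay falls; everyone else rises

print(mean(last_year), mean(this_year))      # ~19.3 -> ~16.8: "pay dropped"
print(median(last_year), median(this_year))  # 3.5 -> 4.5: most got a raise
```

The headline writer who picks the mean reports a drop; the one who picks the median reports a raise, and both are "accurate."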

    And so I basically blogged about it and said that's a really poor choice of headline, because it doesn't represent the real picture of what is actually going on. So that's the story. And that's a great example of what I like to tell people: in order to get to that level of reasoning, you don't need to take a lot of math classes, you don't need to know calculus. I think it's sort of a misnomer, perpetuated by many decades of college instruction, that statistics is all about math and you have to learn all these formulas in order to go anywhere.

    Andy Green: Right. Now, I love the explanation. And it seems that the Times could have just shown a bar chart. But correct me if I'm wrong: what you're saying is that at the upper end, there are CEOs making a lot of money, and they just dropped a little bit. And everyone else, or most, like 80% of the CEOs or whatever the percentile is, did better. But those at the top, because they're making so much, lost a little bit, and that dropped the average. Meanwhile, if you polled CEOs, whatever the number is, 80% or 90% would say, "Yes, my pay has gone up."

    Kaiser Fung: Right. So yeah, I didn't look at the exact numbers there...I don't remember what those numbers are...but conceptually speaking, given this type of distribution, it's possible that just the very top guy dropping by a bit is sufficient to make the average move. Whereas the median, the middle guy, has actually moved up. What that implies is that the bulk, the weight of the distribution, has actually gone up.

    There are many levels of this that you can talk about. The first level is getting the idea that you really should look at the median. And if you really want to dig deeper, which I did in my blog post, you also have to think about what components drive CEO pay: whether the accounting includes not just the fixed base salary but maybe also bonuses, and maybe they even price in the stock components, and you know the stock components are going to be much more volatile.

    It all points to the fact that you really shouldn't be looking at the average, because it's so affected by all these other ups and downs. So to me, it's a basic level of statistical reasoning that unfortunately doesn't seem to have improved in the journalistic world. Even in this day and age, when there's so much data, they really need to improve their ability to draw conclusions. That's a pretty simple example of something that can be improved. Now, we also have a lot of examples of things that are much more subtle.

    I'd like to give a different example of this, and it also comes from something that showed up in the New York Times some years ago. This was a very simple scatter plot that was trying to explain, trying to correlate, the average happiness of people in different countries...typically measured by survey results, where you rate your happiness on a scale of zero to ten or something like that...with what they call the progressiveness of the tax system in each of these countries.

    So, the thing that people don't understand is that by making this scatter plot, you have actually imposed on your reader a particular model of the data. In this particular case, it is the model that says happiness can be explained by just one factor, the tax system. In reality, there are a gazillion other factors that affect somebody's happiness. And if you know anything about statistics, you would run a multivariable regression, which would actually control for all the other factors. But when you do a scatter plot, you haven't adjusted for anything else. So this very simple analysis could be extremely misleading.
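    The single-factor trap can be demonstrated with a small Simpson's-paradox sketch (all numbers invented, no real survey data): within each group the relationship between x and y is positive, but a pooled scatter plot, which ignores the lurking group variable, would show the opposite trend.

```python
# Pearson correlation from scratch; no external libraries needed.
def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two groups (say, two kinds of countries): within each, y rises with x...
g1_x, g1_y = [1, 2, 3], [10, 11, 12]
g2_x, g2_y = [6, 7, 8], [1, 2, 3]

print(corr(g1_x, g1_y), corr(g2_x, g2_y))   # both ~1.0
# ...but pooled, the group effect flips the trend strongly negative,
# which is what a single unadjusted scatter plot would show.
print(corr(g1_x + g2_x, g1_y + g2_y))       # ~ -0.88
```

A multivariable regression with a group indicator would recover the positive within-group effect; the scatter plot alone cannot.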

    State of Cybercrime
    April 02, 2019

    The Making of the Modern CISO


    Should CISOs use events or scenarios to drive security, not checklists? It also doesn’t matter how much you spend on cybersecurity if it ends up becoming shelfware. Navigating one’s role as a CISO is no easy feat. Luckily, the path to becoming a seasoned CISO is now easier with practical classes and interviews. But when cybersecurity is assumed to be not very important, does that defeat the leadership role of a CISO?

    Panelists: Cindy Ng, Sean Campbell, Mike Buckbee, Kris Keyser

    Security Expert and "Hacked Again" Author Scott Schober (Part 2)


    Scott Schober wears many hats. He's an inventor, software engineer, and runs his own wireless security company. He's also written Hacked Again, which tells about his long running battle against cyber thieves. Scott has appeared on Bloomberg TV, Good Morning America, CNBC, and CNN.

    We continue our discussion with Scott. In this segment, he talks about the importance of layers of security to reduce the risks of an attack. Scott also points out that we should be careful about revealing personal information online. It's a lesson he learned directly from legendary hacker Kevin Mitnick!

    Transcript

    Andy Green: So speaking of the attack that the Mirai...I'm not sure if I'm pronouncing that right...attack from last week, I was wondering if, can cell phones be hacked in a similar way to launch DDoS attacks? Or that hasn't happened yet? I was just wondering if...with your knowledge of the cellphone industry?

    Scott Schober: Absolutely. I mean, to your point, can cell phones be attacked? Absolutely. That's actually where most hackers are starting to migrate their attacks: toward the cell phone. And why is that? They're especially aiming at the Android environment. Excuse me. It's open source. Applications are not vetted as well. Everybody is prone to hacking and vulnerable, and there are more Android users. You've got open source, which is ideal for creating all kinds of malicious viruses, ransomware, DDoS, whatever you want to create and launch. So that's their preferred method, the easiest path to get in there, but Apple certainly is not immune to that either.

    The other thing is that mobile phone users are not updating the security patches as often as they should. And that becomes problematic. It's not everybody, but a good portion of people are just complacent. And hackers know that eventually everybody's old Windows PC will be either abandoned or upgraded with more current stuff. So they'll target the guys that are still using old Windows XP machines, where there are no security updates and they're extremely vulnerable, until that dries up. Then they're gonna start migrating over to mobile devices...tablets, mobile phones...and really heavily increase the hacks there. And then keep in mind why. Where are you banking? Traditionally everybody banked at a physical bank or from their computer. Now everybody's starting to do mobile banking from their device...their phone. So where are they gonna go if they want to compromise your credit card or your banking account? Your mobile device. Perfect target.

    Andy Green: Yeah. I think I was reading on your blog that, I think, your first preference is to pay cash as a consumer.

    Scott Schober: Yes. Yes. Yep.

    Andy Green: And then I think you mentioned using your iPhone next. Is that, did I get that right?

    Scott Schober: Yeah, you could certainly..."Cash is king," I always say. And minimize. I do...I probably shouldn't say it...but I do have one credit card that I do use and monitor very carefully, that I try to use only at secure spots where I know. In other words, I don't go to any gas station to get gas and I don't use it for general things, eating out. As much as I can use cash, I will, to minimize my digital footprint and putting my credit out there too much. And I also watch closely, if I do hand somebody my credit card, I write on the back of it, "Must check ID." And people sometimes...not always...but they'll say, "Can I see your ID?" Hand them my license. "Thank you very much." Little things like that go a long way in preventing somebody, especially if you're handing your credit card to somebody that's about to swipe it through a little square and steal your card info. When they see that, they realize, "Oh, gosh, this guy must monitor his statement quickly. He's asking for ID. I'm not gonna try to take his card number here." So those little tips go a long, long way.

    Andy Green: Interesting. Okay. So in the second half of the "Hacked Again" book, you give a lot of advice on, sort of, security measures that companies can take and it's a lot of tips that, you know, we recommend at Varonis. And that includes strong passwords. I think you mentioned strong authentication. Pen testing may have come up in the book as well. So have you implemented this at your company, some of these ideas?

    Scott Schober: Yes, absolutely. And again, I think in the book I describe it as "layers of security," and I often relate that to something that we can all physically relate to, and that's our house. We don't typically have a single lock on our front door. We've got a deadbolt. We've got a camera. We've got alarm stickers, the whole gamut. The more we have our defenses up, the more likely that a physical thief will go next door or down the block to rob us. The same is true in cybersecurity. Layered security, so not just our login credentials, our user name and a password. It's a long and strong password, which most people are starting to get, although they're not all implementing it. We never reuse the same password or parts of a password on multiple sites, because password reuse is a huge problem still. More than half of people still reuse their passwords, even though they hear how bad it is, because we're all lazy. And having that additional layer, multi-factor authentication or two-factor authentication. That additional layer of security, be it when you're logging into your Gmail account or whatever, and having a text go to your phone with a one-time code that will disappear. That's very valuable.

    Messaging apps, since we deal a lot with the surveillance community and understanding how easy it is to look at content. For anything that is very secure, I will look at messaging apps. And what I look for in there is something like...The one I've been playing with and I have actually on my phone is Squealock. There, you do not have to provide your actual mobile phone number. Instead, you create a unique ID and you tell other people that you wanna text to and talk to, "Here's my ID." So nobody ever actually has your mobile phone number because once you give out your mobile phone number, you give away pieces of information about you. So I really strongly encourage people, think before they put too much information out. Before you give your phone number away. Before you give your Social Security number away if you're going to a doctor's office. Are you required to do that? The answer is no, you're not required to, and they cannot deny you treatment if you don't give them a Social Security number.

    Andy Green: Interesting. Yeah.

    Scott Schober: But yet everybody gives it.

    Scott Schober: So think very carefully before you give away these little tidbits that add up to something very quickly, because that could be catastrophic. I was at an event speaking two weeks ago down in Norfolk, Virginia, a cyber-security convention, and one of the keynote speakers invited me up and asked if I'd be willing to see how easy it is to perform identity theft and compromise information on myself. I was a little reluctant, but I said, "Okay, everything else is out there," and I know how easy it is to get somebody's stuff, so I was the guinea pig. And it was Kevin Mitnick who performed it. This is the world's most famous hacker, so it made it very interesting.

    Andy Green: Yes.

    Scott Schober: And within 30 seconds and at the cost of $1, he pulled up my Social Security number.

    Andy Green: Right. It's astonishing.

    Scott Schober: Scary. Scary. Scary.

    Andy Green: Yep, very scary. Yeah...

    Scott Schober: And any hacker can do that. That's the part that is kinda depressing, I think. So even though you could be so careful, if somebody really wants anything bad enough, there is a way to do it. So you wanna just put up your best defenses to minimize and hopefully they move on to the next person.

    Andy Green: Right. Yeah, I mean, on the blog we've written about how hackers are quite good at doing initial hacks to get basic information and then building on that. They end up building really strong profiles. And we see some of this in the phishing attacks, where they seem to know a lot about you, and they make these phish mails quite clickable because they seem so personalized.

    Scott Schober: It can be very convincing. Yes.

    Andy Green: Very convincing. So there's a lot out there already on people. I was wondering, do you have any advice...? We're sort of pro-pen testing at Varonis. We just think it's very useful in terms of assessing real-world risks. Is that something...can you recommend that for small, medium businesses, or is that something that may be outside their comfort zone?

    Scott Schober: No, I do have to say, on a case-by-case basis, I always ask business owners to do this first. I say, "Before you jump out and get a vulnerability assessment or pen testing, both of which I do normally recommend, analyze what value you have within the walls of your company." Again, like you mentioned earlier, good point: are you storing customer information? Credit card information? Account numbers? Okay, then you have something very valuable, not necessarily just to your business, but to your customers. You need to make sure you protect that properly. And the way to protect it properly is by knowing where your vulnerabilities are for a bad guy to get in. That is very, very important. What pen tests and vulnerability assessments reveal are things that your traditional IT staff will not know. Or in a very small business, they won't even think of these things. They don't think about maybe updating, you know, your security patches on WordPress for your website, or, you know, other basic things. Having the world's most long and strong password for your wireless access point? "Well, only my employees use it." That's what they think. But guess what? A hacker pulls into your lot after hours and runs automated software that will pull everything and anything about you and your company off the internet, in case part of it is part of your password. And guess what? They have a high success ratio with some of these automated password-guessing programs. That is very scary to me. Or they may use social engineering techniques to try to get some of that information out of a disgruntled employee or an innocent secretary or whatever...we've all heard these extreme stories...to get into your computer networks and place malware on there. So that's how you really find out. You get an honest appraisal of how secure your company is. Yeah, we did it here.
    I was honestly surprised. At first I thought, "Wow, we've got everything covered." And then I was like, "What? We never would have thought of that." So there are some gotchas that are revealed afterward. And you know what, if it's embarrassing, who cares? Fix it and secure it, and that'll protect your company and your assets.
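    The automated guessing Scott describes typically starts from exactly this kind of publicly scraped vocabulary. A toy illustration in Python (the tokens, casing rules, and suffixes are made up for the example; real cracking tools apply far more mangling rules):

```python
import itertools

def candidate_passwords(tokens, suffixes=("", "1", "123", "2024", "!")):
    """Expand publicly known words (company name, products, town...) into
    the naive password variants an automated guesser would try first."""
    guesses = set()
    for tok in tokens:
        for form in (tok.lower(), tok.capitalize(), tok.upper()):
            for suffix in suffixes:
                guesses.add(form + suffix)
    # Also try simple two-word combinations.
    for a, b in itertools.permutations(tokens, 2):
        guesses.add(a.capitalize() + b.lower())
    return sorted(guesses)
```

    A Wi-Fi passphrase assembled from facts about the company shows up in a list like this in milliseconds; a long random passphrase never does.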

    And again, you gotta think about IP. Some companies...in our industry, we've got a lot of intellectual property here, built up over 44 years as a company. That's our secret sauce. We don't want that ending up in other international markets where it could be used in a competitive area. The way you protect that is by making sure your company is very, very secure. Not just physical security, because that is extremely important and goes hand in hand, but also keeping your computer network secure. And from the top down, every employee in the organization realizes they're not part of the security problem. They're part of the security solution, and they have a vested interest in making sure of that.

    Andy Green: Yeah, no, absolutely. We're on the same page there. So do you have any other final advice for either consumers or businesses on security or credit cards or...?

    Scott Schober: Again, I always like to make sure this resonates with people: people have the power to control their own lives and still function and still have a relative level of security. They don't have to live in fear and be overly paranoid. Am I paranoid? Yes, because maybe an exceptional number of things keep happening to me and I keep seeing that I'm targeted. I had another email the other day from Anonymous, and different threats and crazy things keep unfolding. That makes you wonder and get scared. But do the things that are in your control. Don't put your head in the sand and get complacent, as most people tend to do. People say, "Well, just about everybody's been compromised. Why bother? It's a matter of time." Well, if you take that attitude, then you will be the next victim. But if you can make it really difficult for those cyber-hackers, at least you can say with a clean conscience, "I made them work at it," and hopefully they'll move on to the next target. And that's my goal, to really encourage people: don't give up. Keep trying, and even if it takes a little bit more time, take that time. It's well, well worth it. It's a good investment to protect yourself in the long run.

    Andy Green: No, I absolutely agree. Things like two-factor authentication on, let's say, Gmail or some of your other accounts and longer passwords. Just make it a little bit harder so they'll then move on to the next one. Absolutely agree with you.

    Scott Schober: Yeah, yeah. That's very true. Very true.

    Andy Green: Okay. Thank you so much for your time.

    Scott Schober: Oh, no, any time, any time. Thank you for the time. Really appreciate it and stay safe.

    Security Expert and "Hacked Again" Author Scott Schober (Part 1)

    Scott Schober wears many hats. He's an inventor and software engineer, and he runs his own wireless security company. He's also written Hacked Again, which tells of his long-running battle against cyber thieves. Scott has appeared on Bloomberg TV, Good Morning America, CNBC, and CNN.

    In the first part of the interview, Scott tells us about some of his adventures in data security. He's been a victim of fraudulent bank transfers and credit card transactions. He's also aroused the wrath of cyber gangs, and his company's site has been a DDoS target. There are some great security tips here for both small businesses and consumers.

    Transcript

    Andy Green: Scott Schober wears more than a few hats. Scott is President and CEO of Berkeley Varitronics, a company that makes wireless test and security solutions. He is also an inventor. The gadget that enforces no-cell-phone policies, that's one of his. He's a sought-after security speaker and has been interviewed on ABC News, Bloomberg TV, CNBC, CNN. And he's been on the other side of the security equation, having been hacked himself, and he wrote about that experience in his book, "Hacked Again." So, we're excited to have Scott on this podcast. Thanks for coming.

    Scott Schober: Yeah, thanks for having me on here.

    Andy Green: Yeah, so for me, what was most interesting about your book "Hacked Again," is that hackers actively go after small, medium businesses, and these hacks probably don't get reported in the same way as an attack on, of course, Target or Home Depot. So, I was wondering if you could just talk about some of your early experiences with credit card fraud at your security company?

    Scott Schober: Yeah, I'd be happy to. My story, I'm finding, is not necessarily that different from many other small business owners'. What is different, perhaps, is that many small and medium-size business owners are somewhat reluctant to share the fact that they actually have had a breach within their company. Oftentimes that's because they're embarrassed, or maybe they have a brand that they don't wanna see tarnished; they're afraid customers won't come back to the well and purchase products or services from them. In reality... I talk about breaches often, pretty much every week now, trying to educate and share my story with audiences, and I always take a poll. And I am amazed: now almost everybody raises their hand that they've had some level of compromise, business or personal, be it a debit card or credit card.

    So, it's something now that resonates, and a lot more people realize that it's frequent; it almost becomes commonplace. Another card gets issued, and they have to dispute charges, and write letters, and go through the wonderful procedure that I've had to go through. I think, with myself, it's happened more frequently, unfortunately, because, again, sharing tips and how-tos and best practices with individuals kinda gets the hackers a little bit annoyed, and they like to take on a challenge to see if they can be disruptive or send a message to those who are educating people on how to stay safe, because obviously it makes their game a lot harder.

    And I'm not alone. I'm in good company with a lot of other security experts out there in the cyber world who have been targeted. We all share war stories, and we've always got a target on our backs, I guess it's safe to say. With myself, it started with a debit card, a credit card, then eventually the checking account. Sixty-five thousand dollars was taken out. And I realized this was not just a coincidence. This is a targeted, focused attack against me, and it really hasn't stopped since. I wish I could say it has, but every week I'm surprised by something I find.

    Andy Green: Right.

    Scott Schober: Very scary. I have to just keep reinforcing what we're doing in making it safer to run our business and protect ourselves and our assets.

    Andy Green: Right. So, I was wondering if you had just some basic tips because I know you talked a lot...you had some credit card fraud early on. But some basic advice for companies that are using credit cards or e-commerce. Is there something like an essential tip in terms of dealing with credit card processing?

    Scott Schober: Yeah, yeah, absolutely. There's actually a couple things that I always share with people. Number one, a lot of it has to do with how well do you manage your finances, and this is basic 101 finances. When you have a lot of credit cards, it's hard to manage and hard to keep on top of looking at the statements or going online and making sure that there's no fraudulent activity. Regular monitoring of statements is essential. I always emphasize, minimize the number of cards you use. Maybe it's one card that you use, perhaps a second card you use for online purchases. Again, so it could be very quickly isolated and cleaned up if there is a compromise.

    It's ironic, the other day I was actually presenting at a cyber security show and I was about to go up on stage when my wife called me in a panic. She has one credit card in her own name that she took out many years ago, and she says, "You won't believe it, my card was compromised. How could this happen?" So here it is, I'm preaching to my own family and she's asking me how it happened. She was all embarrassed and frustrated. It's because if we're not regularly monitoring the statement and not careful where we're shopping, we just increase the odds. It's a numbers game. So, really, minimize, and be very careful where you shop, especially online. If we shop for the best price, the best bargain, oftentimes the site with the cheapest price is a telltale sign that your credit card is gonna be stolen there. Go to name-brand stores online and you have a much, much better chance that your credit card is not gonna be compromised.
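    The regular statement monitoring Scott recommends can start with something as simple as flagging charges that break your own pattern. A hypothetical sketch (the merchants, threshold, and two rules here are illustrative; real issuers use far richer fraud models):

```python
from dataclasses import dataclass

@dataclass
class Charge:
    merchant: str
    amount: float

def flag_suspicious(history, new_charges, factor=3.0):
    """Flag charges at never-before-seen merchants, or charges far above
    the cardholder's average. A crude stand-in for real fraud scoring."""
    known_merchants = {c.merchant for c in history}
    average = sum(c.amount for c in history) / max(len(history), 1)
    return [
        c for c in new_charges
        if c.merchant not in known_merchants or c.amount > factor * average
    ]
```

    Even a rule this crude catches the classic pattern in Scott's story: a sudden charge at a merchant you have never used.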

    Andy Green: Right. So, that's actually some good advice for consumers, but what about for vendors? Because as a company, you were taken advantage of. I think I have a note here of a $14,000 charge?

    Scott Schober: You're exactly right, yes. That's a little different. That particular charge, just to clarify, that was somebody who was purchasing our equipment and provided a stolen credit card to pay for it. So there the challenge is, how do you vet somebody that you don't see face-to-face or don't know personally, especially in another country? How do you make sure that that customer's legit? And I've done a couple simple things to do that. In fact, I did one earlier today. Number one, pick up the phone and ask a lot of questions. Verify that they are who they say they are, and what their corporate address is. Make sure you're talking to a person in the accounting department if it's a larger company. Try to vet them and make sure they're legit; go online and see. There are fake websites and fake company profiles and things, but by cross-checking, doing a quick Google search, going onto LinkedIn, you can see if that same person, their title, and their background kind of jibe with what you're hearing on the phone and what you're reading in the email. It's very, very important. Do your due diligence, even if it takes you five or ten extra minutes. You could prevent a breach and save yourself a lot of hassle and a lot of money.

    Andy Green: Right. So, would a small business like yours be held liable if you don't do that due diligence, or does the credit card company protect you if you do the due diligence and then there turns out to be a fraudulent charge?

    Scott Schober: Great question. Unfortunately, the laws greatly protect the buyer, the consumer. There are far fewer laws in place to protect the business owner. And I found that out the hard way, in some cases, in talking to other business owners. It's really hard to get your money back: the second there's a dispute, that money comes out of the account and goes into an account between the two parties till it can actually be settled or arbitrated.

    And it's usually a series; you each have two shots at writing a letter and trying to make your case, and so on. In a case where I had been given fraudulent stolen credit cards belonging to somebody who actually had a lawnmower shop, the money went out of our account into this other account, and I said right away, "Honestly, I didn't realize these were fraudulent charges," and it immediately went back into the other person's account. So the person who was compromised fortunately got their money back, and I felt good that that small business owner wasn't duped or stuck.

    The problem I had was the fact that we shipped the goods and almost lost them. So we got hit with some shipping bills and things like that, but it was more the lesson I learned that was powerful. Spend that time up front, even if it costs you a little bit of money, to avoid the potential of accepting fraudulent charges. The credit card companies that accept the card, yes, there are some basic checks that they do. If it's in, say, the United States, they'll do a zip code check or address check, very basic.

    They really don't validate for you 100% that that card is not compromised. There aren't enough checks and balances in place, or security that can say, "Hey." And really, the onus goes back to you, the business owner. Your name is signed at the bottom, so they can go after your company or you personally, depending upon what your agreement is. And with most of the credit card agreements, they can go after you personally if something fraudulent happens. So really be aware of what you sign with your credit card processor.

    Andy Green: Right, right. We talk a lot about what they call PCI DSS, the Payment Card Industry Data Security Standard, which is supposed to hold companies that store credit card information to a certain security level. And it's been a little bit controversial, or people, I guess vendors, have had issues with the standard. I was wondering if you had any thoughts on that standard? Is that something that you have implemented, or do you not store credit card numbers, so it's not an issue for you, or...?

    Scott Schober: I think it's an issue for everyone, because to some degree everybody stores credit card data for a period of time, be it on premises, be it physical, be it a receipt. What we have done, beyond what the standard mandates, is shred old documents with a micro shredder. So a customer will call me up a week later, a month later, a year later, and I'll say, "I'm sorry, I need to get your credit card again." We do it over the phone, traditionally. We say, "Do not email us. Do not fax us your credit card." Even though many people like to do that, there are obviously risks on many fronts, which is why you should not.

    You also have to keep in mind that a lot of companies are storing a lot of their information in the cloud. It's claimed to be secure, claimed to be encrypted, but it's a remote server. I always ask the question, "Do you know the physical location of that server?" And most people say, "No." "Do you realize that there is redundancy and backup of that?" "Well, no." "And do you realize that somewhere in the process that data may not all be encrypted, as they say?" "No, I didn't realize that." So, to me, I'm very, very cautious. For our online commerce store, what we use means none of the employees within my organization ever see the credit card.

    And that allows some transparency and, I think, some security. It's kept out of our hands; they can buy online. We are never in possession of their physical credit card, or expiration date, or links to their account. And that, I think, is important, that you can keep that level of security, and it actually helps customers. I've had a couple customers say, "You know what, you guys do it right. I can just go online and buy it. There's no extra cost or this or that. It's simple to purchase on your store, and I know nobody's holding that credit card." I say, "Great."

    Andy Green: Right, and that's a very typical solution to go to a processor like that.

    Scott Schober: Exactly.

    Andy Green: Although some of them have been hacked, and...

    Scott Schober: True, true, that is very true.

    Andy Green: But, yeah, that is a very typical solution. And then I... Reading your book, going back to your book, "Hacked Again," there's a series of hacks. I guess it started out sort of with credit cards, but over the years you also experienced a DDoS attack. So, I was wondering if you can tell us a little bit about that. It sounds like one of the earlier ones, and just how you became suspicious and how your ISP responded?

    Scott Schober: Yeah, that's an interesting one. And again, I think especially in light of what happened just the other week, a lot more people can understand what in the world that acronym, DDoS, means. We learned it firsthand a while back, and the pain of it... We have an online commerce store that has grown over the past few years. We'll typically do maybe $40,000 to $50,000 in commerce per month on our online store, so it's an important piece of revenue for a small business. Then you start to find that your store is very spotty and having problems, and people cannot buy. And it's not one or two people; you start getting the phone calls: "Hey, I can't process an order. I can't access your store. I'm being denied. Is there something wrong?" "Gee, that's funny. Let me try. Wait a second, what's wrong? Let's call the ISP..." And we started digging in and found there were waves of periods over time when we were out. None of these were prolonged; it wasn't like we were out for an entire week. There were short bursts, an hour at a time perhaps, when we were out.

    What we did was get some monitoring hardware in place so we could actually look at the traffic and at the specific content, the payload being sent. And sure enough, it was a classic DDoS attack, which we confirmed by analyzing the garbage coming over. So I always encourage companies, if you are having problems, number one, contact your ISP. They can do some analysis. You may have to go above and beyond that if the problem keeps happening... We eventually had to change everything that we did, unfortunately, from our website to our host to our ISP. We have a dedicated server now with hardware at the server. We have hardware here in front of our firewall as well. Again, layers of security; that starts to minimize all the problems. And ironically, we actually receive a lot more DDoS attacks now than we ever did, but we're actually blocking them. That's the good news.
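    The traffic analysis Scott describes boils down to watching per-source request rates and payloads. A toy sliding-window rate check (the window size and threshold are arbitrary here; the dedicated hardware he mentions does this at line rate, plus payload inspection):

```python
from collections import defaultdict, deque

class RateMonitor:
    """Flag source IPs whose request count in a sliding time window
    exceeds a threshold, the basic signature of flood-style DDoS traffic."""

    def __init__(self, window_seconds=10, max_requests=100):
        self.window_seconds = window_seconds
        self.max_requests = max_requests
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def record(self, ip, timestamp):
        """Record one request; return True if this source now looks abusive."""
        window = self.hits[ip]
        window.append(timestamp)
        # Drop timestamps that have aged out of the sliding window.
        while window and window[0] <= timestamp - self.window_seconds:
            window.popleft()
        return len(window) > self.max_requests
```

    A legitimate visitor stays far below the threshold; flood traffic from a single source trips it within a couple of seconds.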

    Andy Green: Actually, are your servers on premises, or are you using...?

    Scott Schober: It's not here physically in our building, but we have a dedicated server, as opposed to most companies, where it's usually shared. What happens there is you start to inherit some of the problems that others on your server have. And sometimes the hackers use that as a backdoor to get access to you, by getting in through what the other guys have. So it's better to just have a dedicated server and pay the extra money.

    Andy Green: Okay, that's right.

    The Psyche of Data

    With data as the new oil, we’ve seen how different companies have responded. From meeting new data privacy compliance obligations to combining multiple anonymized data points to reveal an individual’s identity, it all speaks to how companies are leveraging data as a business strategy. Consumers and companies alike are awakening to data’s possibilities, and we’re only beginning to understand the psyche and power of data.

    Tool of the Week: Zorp

    Panelists: Cindy Ng, Kilian Englert, Mike Buckbee

    More Scout Brody: Bringing Design Thinking to IoT

    By now, we’ve all seen wildly popular internet of things devices flourish in pop culture, holding much promise and potential for improving our lives. One thing we haven’t seen is IoT devices that are not connected to the internet.

    In our follow-up discussion, this is the vision Simply Secure's executive director Scout Brody advocates, since current IoT devices don’t have a strong foundation in security.

    She asks us to consider whether putting a full internet stack on a new IoT device will actually help the user, and points to the benefits of bringing design thinking to the creation of IoT devices.

    Transcript

    Cindy Ng: I also really liked your idea of building smart devices, IoT devices, that aren't connected to the internet. Can you elaborate more?

    Scout Brody: Yes, you know, I like to say, when I'm talking to friends and family about the internet, there are a lot of really interesting, shiny-looking gadgets out there. But as someone who has a background in doing computer security, and also someone who has a background in developing production software in the tech industry, I'm very wary of devices that might live in my home and be connected to the internet. I should say, low power devices, or smaller devices, IoT devices that might be connected to the internet.

    And that's because the landscape of security is so underdeveloped. We think about where...I like to draw a parallel between the Internet of Things today and desktop computers in the mid-90s. When desktop computers started going online in the 90s, we had all sorts of problems because the operating systems and the applications that ran on those machines were not designed to be networked. They were not designed, ultimately, with a threat model that involved an attacker trying to probe them constantly in an automated fashion from all directions. And it took the software industry, you know, a couple of decades, really, to get up to speed and to really harden those systems and craft them in a way that they would be resilient to attackers.

    And I think that based on the botnet activity that we've seen in just the past year, it's really obvious that a lot of the IoT systems that are on the internet full-time today are not hardened in the way they need to be to be resilient against automated attacks. And I think that with IoT systems, it's even scarier than a desktop, or a laptop, or a mobile phone, because of the sort of inevitable progression toward intimacy of these devices.

    We look at the history of computing. We started out with these massive, god-awful mainframe devices that lived in the basements of the great universities in this country. And we progressed from those devices, you know, through personal computers in industry and now to mobile phones. With each step, these devices have become more integrated into our lives. They have access to more of our personal data and have become ever more important to our daily existence. And IoT really takes us to the next step. It brings these devices not just into our homes, but into our kitchens and our bathrooms, and into our bedrooms and our living rooms with our children. And the data they have access to is really, frankly, scary. And the idea of exposing that data, exposing that level of intimate interaction with our lives, to the internet without the hardening that it deserves, is just really scary. So, that's, you know, a bit of a soapbox, but I'm just very cautious about bringing such devices into my home.

    However, I see some benefits. I mean, there are certainly...I think that a lot of the devices that are being marketed today with computer smarts in them are, frankly, ridiculous. There are ways that we could, sort of, try and mediate their access or mediate a hacker's access to them, such that they were a little less scary. One way to do that is, as you mentioned, and as we discussed before, to not have them be just online. You know, have things be networked via less powerful protocols like Bluetooth low energy, or something like that. That poses challenges when it comes to updating software or having, you know, firmware or software on a device, or having a device being able to communicate to the outside world. If we want to be able to turn our light bulb on the back porch on from our phone when we're 100 miles away, it's difficult. More difficult if the light bulb is only really connected to the rest of our house by Bluetooth, but it's still possible. And I think that's something that we need to explore.

    Cindy Ng: Do you think that's where design comes in where, okay, well, now we've created all these IoT devices and we haven't incorporated privacy and security methodologies and concepts in it, but can we...it sounds like we're scrambling to fix things...are we able to bring design thinking, a terminology that's often used in that space, into fixing and improving how we're connecting the device with the data with security and privacy?

    Scout Brody: I think so. I mean, I think what's happening today...the sort of, our environment we're in now, people are saying, "Oh, I'm supposed to have smart devices. I want to ship smart devices and sell smart devices because this is a new market. And so, what I'm going to do is, I'm going to take my thermostat, and also my television, and also my light bulb, and also my refrigerator, and also my washer-dryer, and I'm going to just put a full internet stack in them and I'm going to throw them out on the big, bad, internet." Without really stopping to think, what are the needs that actual people have in networking these devices? Like, what are the things that people actually want to be able to do with these devices? How is putting these devices online going to actually improve the lives of the people who buy them? How can we take these devices and make their increased functionality more than just a sales pitch gimmick and really turn this into something that's useful, and usable, and advances their experience?

    And I think that we, frankly, need more user research into IoT. We need to understand better what the needs are that people have in their real lives. Say you want to make a smart fridge. How many people, you know, would benefit from a smart fridge? What are the ways that they would benefit? Who are the people that would benefit? What would that really look like? And based on the actual need, then try and figure out how to...and here's where we sort of switch to the security perspective: how do I minimize access? How do I minimize the damage that can be done if this machine is attacked, while still meeting the needs that the humans actually have? Is there a way to provide the functionality that I know people actually want and need, without just throwing it on the internet willy-nilly?

    And I think the challenge there is that, you know, we're in an environment where IoT devices...the environment is very competitive, and everyone is trying to be, sort of, the early mover, trying to get their device on the market as soon as possible. We see a lot of startups. We see a lot of companies that don't have any security people, that maybe have, sort of, one or two designers who don't have the opportunity to really go in and do research and understand the actual needs of users. And I think, unfortunately, that's backwards. And until that gets rectified, and you see companies both exploring what it is that people will actually benefit from, and how to provide that in a way that minimizes access, I think that I will continue to be pretty skeptical about putting such devices in my own home.

    Cindy Ng: And, so we've spent some time talking about design concepts, and security, and merging them together. How can someone get started? How do they start looking for a UX designer? Is that something that Simply Secure, the nonprofit that you're a part of, can you help in any way?

    Scout Brody: Yeah. So, that is actually, kind of, exactly what Simply Secure has set out to do as a nonprofit organization. You know, we recognize that it's important to have this partnership between design and security in order to come up with products that actually meet the needs of people while also keeping them secure and keeping their data protected. And so, Simply Secure works both in a sort of information sharing capacity. We try to, sort of, build a sense of community among designers who are interested in security and privacy topics as well as developers and security folks who are interested in learning more about design. We try to be sort of a community resource. We, on our blog, and our very small but slowly growing GitHub repository, try to share resources that both designers and software developers can use to try and explore and expand their understanding at the intersection of security and design.

    We actually, as an organization, do ourselves what we call open research and consulting. And the idea here is that an organization, and it can be any organization, either a small nonprofit consortium organization, in which case, you know, we work with them potentially pro bono. Or, a large for-profit tech company, or a startup, in which case we would, you know, try to figure out some sort of consulting arrangement. But we work with these organizations to help them go through a design process that is simultaneously integrated with their security and privacy process as well. And since we are a nonprofit, we don't just do, sort of, traditional consulting where we go in, do UX research and then come out, you know, with a design that will help the company. We also go through a process of open sourcing that research in such a way that it will benefit the community as a whole. And so the idea here is that by engaging with us, and sort of working with us to come up with a design or research problem...a problem that an organization is having with their software project, they will not only be solving their problem but also be contributing to the community and the advancements of this work as a whole.
