Marc Goldberg of TrustMetrics Cult/Tech Podcast Transcript

Listen to the podcast on SoundCloud: https://soundcloud.com/redcupagency/culttech-podcast-with-marc-goldberg-of-trustmetrics

Lee Schneider: It’s the Cult/Tech Podcast, I’m Lee Schneider. Cybersecurity is on everyone’s mind now. We have hate sites and fake sites. What we click on matters more than ever right now. Marc Goldberg, chief executive of Trust Metrics, a brand safety planning tool, is joining me today on the podcast. He will explain what the term brand safety means, and we will talk about how he is keeping an eye on the bad actors. Hey, Marc, welcome to the podcast.

[00:00:30]

Marc Goldberg: Hey, thanks for having me, Lee.

Lee: So you’re a brand safety guy, it looks like your time has come, given the state of the online world. First of all, tell us what brand safety means.

[00:01:10] 

Marc: Brand safety is when advertisers want to run in safe environments and make sure that their ads are seen in environments that are representative of their brand’s integrity and brand promises. What we have seen, offline and online, is that when advertisers run against, let’s just say, Howard Stern or porn, they sometimes feel, that wasn’t my intention. So we keep people away from whatever the bad stuff is in their minds. Each brand has their own specific brand criteria, and we, at the end of the day, help identify the domains and apps that make a safe environment for when they do want to advertise.

Lee: How did you land here? Give me a sense of your background and expertise in all this.

[00:01:30]

Marc: Sure. So, I mean, I started on the agency side with all the great intentions of putting advertisers in front of good environments. I started in the 90s; we didn’t really have the internet until later on. I got frustrated with TV vendors telling me, you know, “Oh, yeah, your ad saw a 30 rating,” when the reality was they were billing off the May sweeps numbers and then running reruns in the summer. And I just felt like, “Come on, that’s not right. There are no good intentions here.” In the digital world, you can actually have impressions load, and that’s what you pay for. So I made the jump into digital, where I thought it would be a great opportunity to do the right thing.

[00:02:00] 

Then I started seeing a lot of bad things happen in digital, a lot of bad practices, and it got me frustrated. This was the promise we were given for digital: the right ad in the right place, real users, and those users being marketed to in a nice way. Sometimes it isn’t true, and I just really got, for lack of a better word, angry and frustrated. I got loud and proud about being an advocate for brand safety, and then I took this role, and this is what I do on a day-to-day basis.

[00:02:30]

Lee: Yeah. A lot of people have written about how the initial optimistic promise of the internet, really the ARPANET, which was created by scientists to share information, has gone a little sour. There are a lot of bad actors, there’s a lot of crazy stuff going on. Let’s take one example to start: what is ad fraud? Can you give us some examples of where to find it and what we’re looking for?

[00:03:00]

Marc: Well, I mean, that’s a big term, and it’s everywhere. You’re seeing ad fraud happen in mobile, desktop, video, apps. The bad guys have created a lot of ways to take advertising revenue, and you’re hearing a couple of different things around ad fraud, so I want to break it down to one word: bots. Bots are non-human traffic. These bots are either malware that’s on your machine, and that malware is doing things like pretending to visit websites, real or not, and having those websites load pages with banners on them, which generate revenue, and that revenue is coming from advertisers. Or you have headless browsers running out of data centers that are just doing repetitive tasks, and all of these tasks are designed to do one thing: look and smell like a human.

[00:03:30]

Whether they’re trying to do a DDoS attack or trying to harvest emails, that’s one thing. They want to get that information or complete that task, but what they’re really trying to do is go to a website and look human, to beat all the detection services. Ad fraud is happening in many different ways within these different categories, but ultimately it’s the same thing. They are trying to create something that looks like a human, so they can take revenue from the advertiser.

[00:04:20]

Lee: Now, I have to break in here a moment, because as I was doing the post-production on this, I realized that this is pretty complicated stuff and it kind of needed footnotes, but we don’t really have footnotes in a podcast. So I’ve decided to go with an experiment, and here it is. You’re going to hear a little sound like this, and that means it’s a footnote: an explanation about something Marc and I are talking about, where I thought a pause was needed. And when I finish it, you’ll hear this sound. So pardon those interruptions. You can ignore them if you like, but it’s my way of deepening this conversation, because I found it needed a bit more than what we could provide in the flow of our back and forth.

[00:05:00]
 
Lee: DDoS is Distributed Denial of Service. Most people probably know that term, but what it means, basically, is an overload: all these bots just basically take down a server.

Marc: Yep. There are some bots that get misrepresented in the marketplace. Googlebot is actually looking at your page and trying to determine what the page is, so Google can better represent you in their search engine. There are a lot of good bots out there, and they declare themselves. The ones that don’t declare, or try to sneak around the services, are the bad ones, and they’re doing many different things. One of the things you’ll hear, today and going forward, is that even good, buyable sites have bot activity. The example I’ll give is what happens when these bots decide to create profiles, and the profile they want to look like is an affluent male who buys things.
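Marc’s point about good bots declaring themselves can be sketched in a few lines. This is only an illustrative check against the User-Agent header; the crawler names are real, but real bot detection relies on many more signals (reverse DNS verification, behavioral analysis), since bad bots can forge this header.

```python
# Minimal sketch: separating declared crawlers from undeclared traffic
# by User-Agent string alone. Illustrative only -- this header is forgeable.

DECLARED_CRAWLERS = ("googlebot", "bingbot", "duckduckbot")

def classify_user_agent(user_agent: str) -> str:
    """Return 'declared-bot' if the UA announces a known crawler,
    otherwise 'unknown' (could be a human or an undeclared bot)."""
    ua = user_agent.lower()
    if any(name in ua for name in DECLARED_CRAWLERS):
        return "declared-bot"
    return "unknown"

print(classify_user_agent(
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
))  # declared-bot
print(classify_user_agent("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # unknown
```

The undeclared bots Marc describes would deliberately fall into the "unknown" bucket, which is why detection services have to look at behavior rather than labels.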

[00:06:00]

Lee: Cookies are small files stored on your computer. They’re designed to hold a little bit of data specific to a client and website.
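As a rough illustration of the footnote above, here is how a tracking cookie might be constructed and read back using Python’s standard-library `http.cookies` module. The `profile_id` name, domain, and value are hypothetical.

```python
# Sketch of a tracking cookie's lifecycle using the stdlib http.cookies.
# Names and values here are made up for illustration.
from http.cookies import SimpleCookie

# Server side: build the cookie it will send in a Set-Cookie header.
cookie = SimpleCookie()
cookie["profile_id"] = "u-12345"
cookie["profile_id"]["domain"] = "example.com"
cookie["profile_id"]["max-age"] = str(60 * 60 * 24 * 365)  # persist ~1 year

print(cookie["profile_id"].OutputString())

# Later: parse the Cookie header the browser sends back on each visit.
returned = SimpleCookie("profile_id=u-12345")
print(returned["profile_id"].value)  # u-12345
```

That small identifier is all it takes to recognize the same browser on every subsequent visit and keep building the profile Marc describes.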

Marc: If you think about that profile… now they have a valuable user to some advertiser, and that cookie is followed in re-targeting, which is a common way to buy media.

[00:06:30]

Lee: What is re-targeting, also known as re-marketing? It’s a form of online advertising targeted to consumers, you and me, based on our previous internet actions. If we visit a website, we may see an ad for that website on some other platform, like Facebook. That’s re-targeting. So it’s a fake site, but why put up that site? I want to know what that fake publisher, that bad actor, is getting out of it.

Marc: Yeah, well, they’re getting advertising dollars. They put ad code on the page, and that opens it up to advertising dollars. The ad code plugs into these ad exchanges, which is the way people are buying more and more: programmatically.

[00:07:00]

Lee: What is programmatic ad buying? Automated ad-buying technology allows marketers to use a computerized trading system, like the kind you’d find on Wall Street, to place bids for ad space across the internet and into some parts of traditional media. There’s a demand side and a supply side, with the middlemen, the programmatic ad-buying services, connecting the dots.
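The bidding in that footnote can be sketched as a simple auction. Many exchanges have historically run second-price auctions, where the highest bidder wins the impression but pays roughly the second-highest bid; the bidder names and CPM prices below are made up for illustration.

```python
# Minimal sketch of a second-price auction for one ad impression.
# Bidder names ("dsp_a", etc.) and prices are illustrative only.

def run_auction(bids: dict) -> tuple:
    """Return (winning bidder, clearing price): highest bid wins,
    but the winner pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

bids = {"dsp_a": 2.40, "dsp_b": 1.95, "dsp_c": 0.80}  # dollar CPM bids
winner, price = run_auction(bids)
print(winner, price)  # dsp_a 1.95
```

The key point for ad fraud: this machine-speed loop runs billions of times a day with no human looking at the page being bought, which is exactly the blind spot the fake-cookie scheme exploits.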

[00:07:30]

Marc: The dot-com itself, “fake.com,” isn’t doing a whole lot of work other than keeping the site up and getting traffic there. The machines trying to figure out where to place targeted ads see this cookie and say, “Oh, I like that user, I want to buy that user, I’m going to bid for that user, and I’m going to serve an ad to that user.” Even though it’s not a user.

[00:08:00]

Lee: Let me play this back to you so I understand it and can be sure that the listeners understand it. A cookie is something on a website that collects information about me, sometimes with my consent, sometimes not. The cookie knows where I’m logging in from, and maybe what kind of computer I’m using, and it starts building a profile on me. Sometimes it’s very detailed, right? Sometimes it’s a white male in Southern California of a certain age.

[00:08:30]

Then the trick, for a fraud site, crap.com, is to build a fake person, a fake cookie, that these automated ad services will later look at and say, “Hey, that looks like a good person,” even though it’s a fake person. “We’re going to throw some money at that, or we’re going to throw pennies or dollars at that.” And the fake site gets some bucks from that. They sort of set up shop and do it as long as they can. When they get detected, they close down shop and reformat into something else. Is that a pretty good restatement?

[00:09:00]

Marc: Very good. They know how to target where the money is. The motivation for crap.com, fake.com, is to make money, and crap.com, fake.com doesn’t have just one site; this is done at scale. These guys are doing thousands of websites. So it’s fake.com to fakeone.com to faketwo.com, all the way to fakeonethousand.com. They build them and get them into the advertising ecosystem, and when they make money, they continue to build more. When a site stops making money, they don’t care.

[00:09:30]
 
Lee: Yeah, sure.

Marc: They turn it off, they wait a while. One of the great examples I like to use in my presentations to clients is to walk them through a specific lifestyle site; I’m not going to name it. It’s harmless, you look at it, it doesn’t look bad. I’d be okay if my ads ran here. It almost looks like a nice mommy blog, just with a lot more gaps. You look at this site, and it has food as a channel, coupons as a channel, women’s health as a channel, exercise as a channel. When you look into the site, it’s really not anything special, but it looks okay. It’s doing millions of impressions, and your ad fraud detection service says, “Wait a minute, it’s serving bots. Blacklisted.”

[00:10:00]

As soon as it gets blacklisted, if the bad guys realize that it’s being blacklisted, they will take that same site and divide it into four new sites: they’ll take women’s health and each of those other channels and build them into four new sites. Then, after that gets caught, they go down to individual articles, mixing and matching. They take more or less all of the same content and just continue to re-spin it, refine it, move it around, and continue to beat the bot detection services by sending more and more traffic to these sites.
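The blacklisting Marc describes boils down to a domain check before an ad is bought or served. A minimal sketch, with placeholder domains borrowed from the examples in this conversation:

```python
# Sketch of a blocklist check: allow every domain except those explicitly
# listed, including their subdomains. Domains are placeholders from the
# conversation, not a real blocklist.
from urllib.parse import urlparse

BLOCKLIST = {"fake.com", "crap.com"}

def is_allowed(url: str) -> bool:
    """True unless the URL's host is a blocklisted domain or a subdomain
    of one."""
    host = urlparse(url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_allowed("https://news.example.org/story"))  # True
print(is_allowed("https://fake.com/page"))           # False
print(is_allowed("https://sub.crap.com/article"))    # False
```

This also shows why the re-spinning trick works: splitting fake.com into four fresh domains defeats an exact-match list until each new domain is caught and added.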

[00:11:00]

Lee: What is a blacklist? In computing, a blacklist or blocklist is an access control mechanism. It allows through all elements, like email addresses, users, passwords, URLs, IP addresses, and domain names, except those that are explicitly listed and therefore blocked. More in the news than fake sites and fraud sites are fake news sites. Why do you think it’s been difficult to detect fake news sites? Is it plain old negligence and lack of responsibility on, say, Facebook’s part, or is there really a technical explanation for why so many of these sites get put up and don’t get taken down?

[00:11:30]

Marc: All the other detection services are looking for viewable inventory, and viewability is just: is my ad in view? All those guys are looking for that, and these fake news sites check that box. They’re viewable, because if you’ve ever gone to these sites, you’ve seen there are 10 ads above the fold; they don’t care, it’s not a good user experience. They also check the box for being safe. Is it nudity, is it porn? No. What we have been doing is looking at quality from the beginning of time. We determine quality by the presence of good publishing features, like, you know, an editor, and we do look at the words on a page. We have been looking at individual websites with both crawler-based technology and humans.

When you start looking at these news sites, you see patterns. No real news site writes in all caps, no real news site has a tone like this, and you can tell these sites are fake. None of the other guys were touching it. Does it relate to Facebook? Facebook, you know, came out quick; Zuckerberg said, “We didn’t influence news.” I’m still unsure if I believe that. I have people on my news feed who still share these articles. They’ve done a good job of creating some user tools, and I think they’ve done a good job of trying to create partnerships with fact-checking organizations. Twitter, I would call a hot mess.
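One of the signals Marc mentions, all-caps writing, is easy to sketch as a toy heuristic. A real quality model combines many such signals (tone, publishing features, crawl data, human review); the threshold and example headlines below are arbitrary illustrations, not TrustMetrics' actual method.

```python
# Toy heuristic for one fake-news signal: legitimate news headlines
# rarely appear in all caps. Threshold and examples are illustrative.

def caps_ratio(text: str) -> float:
    """Fraction of alphabetic characters that are uppercase."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    return sum(c.isupper() for c in letters) / len(letters)

def looks_suspicious(headline: str, threshold: float = 0.8) -> bool:
    return caps_ratio(headline) >= threshold

print(looks_suspicious("SHOCKING TRUTH THEY DONT WANT YOU TO SEE"))  # True
print(looks_suspicious("Local council approves new budget"))         # False
```

No single rule like this is decisive on its own, which is the point of combining crawler signals with human review as Marc describes.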

Lee: Yeah.

Marc: There are bot armies there that just try to get things trending. When you get to that trend, which is generally a political trend, you can see links, and those links go to fake news sites. So what the Twitter bot armies are doing is getting it to trend so that you and I, humans, look at these sites because we’re curious, and we click, and now we’re humans landing on a fake news site. I think Google, again, did the best job here. They came out in October, I think, and added a fact-check label to any dubious stories. They were second, by the way; we were first doing this.

[00:13:30] 

Lee: We know that there are political motivations for putting up fake news sites, getting things trending, and then sending people off to fake news. I’ve had personal experience with it; people have come up to me, cornered me in a restaurant, blathering fake news right at me and saying, “Just check Facebook, you’ll see it’s true.” So I know this. Do people put these sites up just to be evil, or are they profiting from them, ad dollars? And if that’s the case, is it effective when Google says, “Oh, we’re not running ads on that fake news site anymore”?

Marc: I don’t know the answer to the motivation, other than money. There could be some propaganda, reasons to show a candidate or a situation in a different light. I bet you, yes, sure, there are some sites and some people doing that, but really, this is becoming an ecosystem issue. Now that all these detection services are catching bots, well, you need humans: “Oh, my God, look at how easy it is to get a human. Let’s just make some fake news articles or some fake celebrity articles.” I don’t know if you or the audience has seen the NPR article and interview, but they interviewed a guy who had been creating fake news sites for a living.

[00:14:30]

When that happened, he got hundreds of calls from other networks. So with these guys, if we’re going to choke off the revenue, we’ve got to do it collectively, and if advertisers want to care about it, that’s really the only way. Fake news is just a byproduct of fake publishing. Fake publishing is tricks to get people to come and see that, you know, Justin Bieber died or something, and you click and you land, and no, he didn’t die, he just died on stage, or something silly like that. These are just clickbait tricks. When you ask what their motivation is, I would say yes, it’s money. There are probably still some doing this for political reasons as well, but we’ll probably find that out in the future, I don’t know.

[00:15:30]

Lee: What’s interesting to me is that this whole clickbait trick has become amped up, steroided up: you get clickbaited into an ecosystem of fake news. That seems like a different order of magnitude to me. We’re talking about something bigger, because you can look at, I think it was ProPublica, or a site that showed me red-state and blue-state Facebook and Twitter feeds, and it’s different realities, different ecosystems. This dream of the internet connecting everyone, everyone having one big conversation, is, I think, not viable any longer. I think people are having small-town conversations and bubble conversations and echo chamber conversations, driven in large part by bots and trending stuff that’s not really trending and fake news, which is kind of a sad state of affairs. Maybe you can cheer me up, or maybe you think that I’m right.

Marc: You just depressed me even more, because that’s kind of what I’ve been thinking: that it’s a bubble world. I really do feel that we all live in our own little bubbles. I don’t see a lot of the stuff that people are complaining about, and they don’t see my world. So it’s really a bubble world. But I think what Facebook has done, if the people you’re friends with truly are sharing fake news, at least puts you at a bit of an advantage. If you’re paying attention to this topic, you actually should now be thinking about whether something is real or not real. I think that will only further our education in understanding what’s real.

[00:17:00]

There was an article the other day that I had to go and google just to make sure it was from a reputable source, and see if it was on other sources, because now I’m concerned that fake news is really out of control and it’s going to influence more than just our politics. It’s just going to gain traction, and the advertisers who are supporting it, knowingly or unknowingly, need to understand that there are ramifications for being adjacent to this. We did a study recently with two of the companies we work with, and less sophisticated readers associate the ad with the content. So whether they start to realize this is fake news or they don’t, what’s the relationship there for a brand to be adjacent to fake news?

Lee: Yeah, which gets into the question: should marketers care about fake news, and should they care about running on a site like Breitbart?

Marc: Sleeping Giants is a Twitter handle featured in The New York Times. They are essentially calling out brands running on Breitbart. And what I love about what they’re doing, and I don’t know who they are, I don’t think anyone knows who they are, is calling brands out and saying, “Did you know you’re running on Breitbart?” In programmatic, and we mentioned re-targeting earlier, and we mentioned how these ecosystems work earlier, you can run on Breitbart without knowing. But if you do know, would you want to be against that hate? That is a legitimate question for a brand. Do people who read Breitbart eat cereal and buy things? They do. Is that an environment you want to be adjacent to if your brand promise is wholesome? Maybe not.

[00:18:30]

So each brand has, you know, their criteria, their lens, I don’t know the proper marketing speak, but each brand has an idea of what environments are good or not. I think I mentioned Howard Stern at the beginning; Howard Stern is very extreme. There’s, you know, Rush Limbaugh, who is very extreme. There are a lot of different celebrities with these extreme brands that some advertisers do want to be adjacent to, and there are just that many more that don’t want to be. But what I think Sleeping Giants is doing great is just letting people know where they’re running, and if a brand condones hate, we should all know about it.

[00:19:30]

Lee: The transparency is really what’s most important. I thought it was a bit of a cop-out when Zuckerberg said, “Well, we’re going to put this stuff up and you get to decide whether it’s real or not.” Then he had to walk that back a little, and they came up with the idea of the fact-checking organizations, some quite prestigious. Now, coming from journalism, I’m used to fact checking; that’s what we do. The argument is being made, and it’s probably a valid one, that we all have to be fact checkers now, and we all have to look at where ads turn up, see the brands associated with those ads, and decide: does that look good to me? But we’ve crossed a divide now with things like Breitbart, which are essentially the re-branding of Nazism and white supremacism.

[00:20:00]

So, yikes, you know, I think we’re into whole new territory here, with these bots that are enormously powerful and capable of driving us into an ecosystem where we really wouldn’t want to be if we knew we were there.

[00:20:30] 

Marc: That’s really what brand safety is about: knowing where you’re running and where you want to run, aligning those two things, and making sure you’re not putting your brand at risk. Some advertisers just take a blacklist approach, they just don’t want to be on this mess of sites, and that is not the best way to go about it. You actually should be inclusive, not exclusive, and I can give you 17,000 different reasons why. But if you don’t take any of those approaches, you will run on the worst sites on the internet, and you will risk tarnishing the brand.

[00:21:00]

Some advertisers decide, “It’s a direct-response campaign, I don’t need to monitor it, I just need to monitor results.” I have yet to see a study, and I would love for someone to do one, on the impact to a brand of running an ad on a hate site or a porn site, or right after a porn site. It’s important to remember that you, as a brand, have some obligations to your customers, and to knowing your customers. If you want to insult them by promoting hate, go for it. If you don’t, then call me.

[00:21:30]

Lee: Hey, Marc, thanks so much for joining me today on the podcast.

Marc: I had a great time. Thank you, and thank you for bringing us into the conversation.

Lee: I’m Lee Schneider and this has been the Cult/Tech podcast.