The History of How I Got Here (Continued)
InMarket Prospects didn't use first-party data from a website pixel, which made it unlike HashTargetr's data. Instead, it relied on keywords from traffic to other websites. It was unorthodox. We had a strategic partner who owned a DSP (demand-side platform). Say we had a mortgage client who wanted an audience of people visiting BankRate.com, or an eyeglass company that wanted to target visitors to specific Walmart.com pages, like eyeglass shoppers. Since these websites all ran Google display ads on their pages, we could buy ads on those pages. That was the straightforward approach, but you can't always get competitive ads to run at acceptable CPMs, or even get your ads accepted, and when you do, you pay a massive premium. So the DSP would run PSAs (public service announcements), like "don't do drugs"; that kind of ad would always get accepted, and at the lowest prices allowed. DSPs receive extra data for attribution confirmation that advertisers don't get to see: the DSP needs to know which page showed the ads, and it also sees the keywords in the URL when the visitor reached the website via a search term. We could use our identity resolution technology to enhance those audiences, match them to a hashed email, and then use the audience in all the major ad platforms, as with HashTargetr.
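To make the keyword side of this concrete, here is a minimal sketch of pulling search terms out of a referrer URL's query string. The parameter names and the URL are hypothetical examples, not our actual implementation, which lived inside the DSP's reporting pipeline.

```python
from urllib.parse import urlparse, parse_qs

def extract_search_keywords(referrer_url):
    """Pull search keywords out of a referrer URL's query string, if any."""
    query = parse_qs(urlparse(referrer_url).query)
    # Common search-query parameter names (illustrative, not exhaustive).
    for param in ("q", "query", "k", "keywords"):
        if param in query:
            return query[param][0].lower().split()
    return []

# A visitor who searched their way onto a retail page:
print(extract_search_keywords(
    "https://www.example.com/search?q=progressive+eyeglass+lenses"))
```

Keywords like these, tied to the page the ad actually served on, were the raw material for the audiences described above.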
From 2017 to 2018, I sold Walmart's audiences back to them through a contract with ZEISS, the lens company that ran the eyewear centers in Walmart. ZEISS wanted to target Walmart.com shoppers for its own website in a very specific manner: separating Walmart shoppers from independent eyewear brand shoppers. I showed them how we could do it; I even showed the Walmart rep, since Walmart had to approve the contract. They paid for the campaign with the co-op budget through ZEISS. We got a six-month contract, then a renewal, one year before Walmart decided to stop using Google and brought its ad inventory in-house.
After much testing, we found that these audiences were more sizzle than steak. The whole audience engineering process was very labor-intensive for the AdOps departments; it wasn't automated. Plus, audience identification, scoring, and segmenting needed to happen in real time.
We started understanding Google's and Facebook's lookalike (LAL) algorithms and realized we needed to create real-time audiences. Real-time audience creation became our direction.
Earlier, we had built a private bulk email and email retargeting platform for a client, and AgencyXLR extended that system for our own use. The email retargeting pixel worked in conjunction with our HashTargetr pixel, allowing us to send a website visitor an email (B2B) without the visitor opting in. We matched the person to our identity graph and sent the email on behalf of the client, but to protect the visitors' privacy we never disclosed the clear text email address to the client. We learned the pitfalls of inaccurate emails in this endeavor. Think about how many email addresses you have, including the older ones. The challenge was the cost of verification and data hygiene; we added multiple data layers for each visitor. It turned out to be more sizzle than steak in the short run. We didn't see the campaign lifts we hoped for, and at the time nobody else offered a similar service to test against. In hindsight, with the right product line and a long-term plan, email retargeting could be a key part of a marketing stack. We never got there. We were heading toward a real-time audience world with zero PII. More to come with IDENTYO.
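The matching step above hinges on hashed emails as the join key. A minimal sketch of how that kind of lookup generally works, assuming SHA-256 over a normalized address (the graph contents here are made up):

```python
import hashlib

def hash_email(email):
    """Normalize then SHA-256 hash an email; a common key format for identity graphs."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# A toy identity graph keyed by hashed email (hypothetical data).
identity_graph = {
    hash_email("jane.doe@example.com"): {"company": "Acme Co", "role": "Buyer"},
}

# As captured by a pixel, with stray whitespace and capitalization:
visitor_hash = hash_email("  Jane.Doe@Example.com ")
match = identity_graph.get(visitor_hash)
print(match)  # matches despite the case and whitespace differences
```

Normalization before hashing is the whole game: `Jane.Doe@Example.com` and `jane.doe@example.com` must produce the same key, or the graphs never join.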
Postal retargeting was my white whale for over a year. Postcardable was the postal retargeting service we built. We connected our HashTargetr pixel to several postal databases for real-time identity resolution: when someone visited a website, we matched them to multiple identity graphs and to postal databases, including the National Change of Address (NCOA) database. Simple enough, right? Not so easy. We'd test by matching against a Canary file of about 2,000 customer records that included friends and family, because we knew everyone's exact, verifiable data in that file. We took this process to every big data company, the gold standard of data companies, the best of the best, and we were lucky to get a 30%-40% accurate match against our Canary file. At one point, an accurate match seemed impossible. Consider the variables. We started with a hashed email address. There are many assumptions, but the one certainty is that a hashed email can be unlocked with only one key: the exact clear text email address, which is guaranteed to be accurate. The problem is that most identity graphs we've worked with don't keep their email data current on usage recency; they're uncertain about the last time the email was used. We knew this from our testing with AgencyXLR: many of the hashed emails may be old or not primary ones. But even assuming all the emails are good, current, primary addresses, and assuming the NCOA database is accurate, you'd still be way off. We know because we tested the accuracy for a year, and every single big data firm uses NCOA. Then I had an idea: append the clear text email address to the top five databases, not just one. Almost 100% of the time, the five databases would not all match across the board, so we'd take a match where at least three of the five postal addresses agreed.
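The three-of-five consensus rule can be sketched in a few lines. This is an illustrative version, assuming each database returns a single address string (or nothing) and that normalization is just lowercasing and trimming; real postal normalization (CASS standardization, etc.) is far more involved.

```python
from collections import Counter

def consensus_postal_match(candidate_addresses, threshold=3):
    """Accept a postal address only if at least `threshold` of the
    appended databases agree on the same normalized address."""
    normalized = [a.strip().lower() for a in candidate_addresses if a]
    if not normalized:
        return None
    address, votes = Counter(normalized).most_common(1)[0]
    return address if votes >= threshold else None

# Five hypothetical database responses for one clear text email:
responses = [
    "123 Main St, Springfield, IL 62701",
    "123 Main St, Springfield, IL 62701",
    "99 Oak Ave, Peoria, IL 61602",       # stale record from one provider
    "123 Main St, Springfield, IL 62701",
    None,                                  # no match from this provider
]
print(consensus_postal_match(responses))
```

Requiring agreement across independent sources is what lifted accuracy past what any single gold-standard provider could deliver on its own.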
We'd then append the clear text email address to devices with MAIDs (Mobile Ad IDs), and buy the longitude and latitude with dwell time from a mobile platform. If the longitude and latitude with dwell time matched the postal address, we knew we had a match. OK, we did it: we figured out how to get an accurate match for postal retargeting from our pixel. Next problem: cost. Our solution priced out all of our SMB clients. Building it made me think hard about data accuracy within the big data industry. I am sharing these experiences for a reason: they hopefully provide insight into the complexities of creating an omnichannel marketing campaign.
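The device-level confirmation step can be sketched as a distance-plus-dwell check. The thresholds (75 meters, 4 hours) are illustrative assumptions, as are the coordinates; the idea is simply that a device dwelling for a long stretch very near the candidate postal address corroborates the match.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def confirms_residence(device_ping, postal_latlon, max_dist_m=75, min_dwell_min=240):
    """Treat a MAID ping as confirming a postal address if the device dwelled
    long enough, close enough to the address (thresholds are illustrative)."""
    dist = haversine_m(device_ping["lat"], device_ping["lon"], *postal_latlon)
    return dist <= max_dist_m and device_ping["dwell_minutes"] >= min_dwell_min

ping = {"lat": 39.7990, "lon": -89.6440, "dwell_minutes": 540}  # overnight dwell
home = (39.7991, -89.6441)  # geocoded candidate postal address
print(confirms_residence(ping, home))  # True for this made-up point
```

A long overnight dwell within tens of meters of the candidate address is strong evidence the consensus postal match was right; a brief drive-by is not.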
Fresh Leads Again was a service we built to reanimate old leads. Clients would upload their old leads, and our system would tell them when those prospects were back in the market for whatever they sold. Again, we learned: this works when executed fully, not partially. We built it because clients asked us to. This type of data tech works better when connected to a marketing process that includes multichannel messaging. We'd ping the client's data to provide timely insight, but they wouldn't follow up properly because their systems weren't built to handle smarter, multichannel follow-up. We'd hear, "We called them and left a message."
We were always at the drawing board at intersections like this one.
Data Enricher was a real-time data appending platform, and by far our most successful one. It ran for about four years, with contracts with a dozen big data APIs feeding the appending process. It was a simple, clean process: clients could upload data, even a single record, or use our API to power their applications. Like our other systems, the platform arose from pure necessity. Our clients needed a cheap, reliable source to add data to their workflows and tech stacks.
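The core of an appending process like this is merging provider responses into a client record without clobbering what the client already knows. A minimal sketch, with hypothetical field names and provider data; the real platform fanned out to a dozen contracted APIs:

```python
def append_record(record, provider_responses):
    """Fill missing fields in a client record from provider responses,
    in priority order; never overwrite data the client already has."""
    enriched = dict(record)  # copy so the original record is untouched
    for response in provider_responses:
        for field, value in response.items():
            if value and not enriched.get(field):
                enriched[field] = value
    return enriched

record = {"email": "jane@example.com", "phone": None}
providers = [
    {"phone": "217-555-0100", "company": "Acme Co"},     # highest priority
    {"phone": "217-555-9999", "city": "Springfield"},    # lower priority
]
print(append_record(record, providers))
```

Ordering the providers by trust means a conflicting value from a cheaper, lower-quality API (the second phone number here) never displaces the better source.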
The death knell for Data Enricher was the Covid lockdown. All our systems were self-funded or funded by a client, and the drop in business made the monthly cost of the big data APIs too high to maintain without a steady flow of client work. Lessons learned on many levels.
Connect With Gil On LinkedIn
ABOUT GIL ORTEGA
For over 30 years, Gil has earned the esteemed moniker of "The Chief Rainmaker" due to his renowned expertise as a Customer Acquisition Specialist.