News of the potential of generative artificial intelligence (AI) to produce anything from imagery and code to copy, speedily and at scale, has swamped our feeds. However, almost as quickly as our collective excitement peaked, doubts emerged about the ethical cost of such untethered, unguided progress.
Fake images have repeatedly gone viral and fooled social media users, ads served by Google have been reported to actively fund fake news, and Twitter's recent decision to turn its signature "blue check" from a mark of authenticity into a paid subscription has seemingly lent undeserved credibility to dangerous sources. In fact, as recently as March 2023, 25 misinformation superspreader accounts benefited from this updated approach. At the same time, the winter wave of tech layoffs has continued into spring, with mass redundancies obscuring the promised path to a prosperous, virtual future.
If we look at the advertising ecosystem underneath it all, the lifeblood of most modern media, the difficulties continue. Lack of transparency is a challenge across the board: for consumers who are concerned about sharing their data; for advertisers whose campaigns do not attain their objectives; for brands whose ad spend is wasted; and for creatives who are often not paid for their work. It is estimated that more than 2.5 billion images are stolen and more than 3.2 billion shared every day, so how can we discern the real from the fake, the safe from the sordid?
The limits of AI
With a growing range of challenges ahead, it seems many are looking to the past for solutions, and for good reason. Growing at a CAGR of 13.3%, with a global market value estimated to reach $335.1 billion by 2026, contextual advertising has the potential to serve relevant content without homing in on individual browsing behaviors, making it an attractive, privacy-compliant proposition for brands.
Many see AI and machine learning as the key to driving contextual advertising at scale, but it's clear the technology still has ample issues to contend with. Despite recent advancements, inbuilt bias, deepfakes, and facial recognition failures still result in unfavorable ad placements or missed opportunities. In fact, while it's frighteningly easy to generate a realistic face, a variety of factors can confuse an algorithm, whether aging, pose, lighting, or emotion.
Words and topics are inherently ambiguous and, as a result, are often overzealously blocked by brand safety technologies. Double entendres and natural disasters cause confusion, while minority media and sensitive topics such as sexuality, race, and gender, which often already suffer a representation deficit in the media, are avoided altogether.
More often than not, calibrating a brand's suitability and safety strategy with limited technology can impact both reach and scale. And since these are two core priorities across the industry, poor contextual targeting has, understandably, emerged as a key concern as the industry moves away from using third-party cookies.
What's next for image safety
Publishers and advertisers need to evaluate whether the systems they have opted for are as effective as they could be, and how the industry can progress into a future where optimal brand safety doesn't compromise accuracy.
We need not look far for a solution: music and TV streaming platforms such as YouTube have given those industries a clearer view of the supply chain and significantly slowed intellectual property theft. Technology designed to stream images across the web can plug the same gaps. Rather than being uploaded directly to a website, a streamed image is held on a secure, password-protected central server and embedded in the webpage where it is displayed. Powerful metadata, security features, and interactive controls block illegal sharing, prevent tampering, boost engagement, and enable image tracking and analytics. Not only does this ensure an ethical imagery supply chain, it also protects content owners and makes brand safety a top priority, empowering a more transparent and fair media ecosystem.
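To make that mechanism concrete, here is a minimal sketch of how a streamed, server-held image might be embedded in a page. The endpoint, token scheme, metadata header, and canvas rendering are all assumptions made for illustration; this is not any vendor's actual API.

```ts
// Minimal sketch of image streaming, under assumed names and endpoints.

interface StreamedImageMeta {
  caption: string;      // supplied by the image owner
  attribution: string;  // stays with the image throughout its lifecycle
  topics: string[];     // content labels usable for contextual targeting
}

async function renderStreamedImage(
  container: HTMLElement,
  imageId: string,
  token: string // short-lived access token issued by the central server
): Promise<StreamedImageMeta> {
  // The image is never uploaded to the page as a plain file; it is fetched
  // from the secure server on each view, so access can be revoked centrally.
  const res = await fetch(
    `https://stream.example.com/images/${imageId}?token=${token}`
  );
  if (!res.ok) throw new Error(`Stream denied: ${res.status}`);

  // Hypothetical response header carrying the owner-supplied metadata.
  const meta = JSON.parse(
    res.headers.get("x-image-meta") ?? "{}"
  ) as StreamedImageMeta;

  // Draw into a canvas rather than an <img> tag, which removes the direct
  // file URL and makes casual saving and hotlinking harder.
  const bitmap = await createImageBitmap(await res.blob());
  const canvas = document.createElement("canvas");
  canvas.width = bitmap.width;
  canvas.height = bitmap.height;
  canvas.getContext("2d")?.drawImage(bitmap, 0, 0);
  container.appendChild(canvas);

  return meta; // caption and attribution can then be rendered alongside
}
```

Because every view goes through the central server, each impression can be logged and access revoked, which is what enables the tracking and analytics described above.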
Image owners and creators themselves provide the details of the image content, along with captions and attribution; these elements stay with the image throughout its lifecycle, delivering crucial transparency. Combining this metadata with AI raises the potential for high-quality contextual targeting, and because the technology enables in-image advertising, whereby an ad temporarily replaces an image within an article when it appears in the user's viewport, it can also guarantee high-visibility placement.
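As an illustration of that in-image placement, the following sketch swaps an image for an ad creative once at least half of it enters the viewport, then restores the original. The selector, ad URL, threshold, and display duration are illustrative assumptions, not a description of any specific product.

```ts
// Hypothetical sketch of in-image ad delivery via IntersectionObserver.

const AD_CREATIVE_URL = "https://ads.example.com/creative.jpg"; // placeholder
const DISPLAY_MS = 5000; // how long the ad occupies the image slot (assumed)

function enableInImageAd(img: HTMLImageElement): void {
  const originalSrc = img.src;
  let shown = false;

  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting && !shown) {
          shown = true;              // show the ad once per image
          img.src = AD_CREATIVE_URL; // swap in the ad creative
          setTimeout(() => {
            img.src = originalSrc;   // restore the editorial image
          }, DISPLAY_MS);
          observer.disconnect();
        }
      }
    },
    { threshold: 0.5 } // fire once at least half the image is visible
  );

  observer.observe(img);
}

// Opt-in via a data attribute so only designated images carry ads.
document
  .querySelectorAll<HTMLImageElement>("img[data-in-image-ad]")
  .forEach(enableInImageAd);
```

Triggering on viewport entry is what underpins the high-visibility claim: the ad only ever renders in a slot the user is actually looking at.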
Building back better
What the industry needs is to regain the trust and confidence of its audiences, and this can only be achieved through a transparent media ecosystem. The difficulties of the digitized world are many, and images are at the crux of the issue. Brands, publishers, and image owners must be able to track and protect their images, while also measuring how users interact with image content, if we are to create and maintain a safer media ecosystem.
Coupling innovative technology such as image streaming with contextual targeting can deliver the relevant, brand-safe placements brands dream of, open new models of monetization, and eliminate the need for expensive, ineffective systems that more often than not fail to trace stolen images. A revolution is around the corner, one that can both future-proof the industry and create opportunities across the media and image supply chain, benefiting users, image owners, advertisers, brands, and publishers alike.
Rob Sewell is the chief executive officer of SmartFrame Technologies.