Bayesian modelling for predicting winning probabilities of bids in ad auctions
In our previous post on bid optimisation, we concluded with a clichéd cliffhanger: "Better with something Bayesian, until next time." Clickbait from an era before clicks. It was not all wishful thinking, though. Back then we were already working on a Bayesian win price prediction model, and having now put it into production, we are in a position to share why we strongly believe it is worth adopting a Bayesian approach to win price prediction in ad auctions.
Generally speaking, Bayesian approaches update prior beliefs, expressed as probability distributions, based on observed evidence to infer current beliefs, expressed as posterior distributions. Kind of an incremental model update, you’d say? Yes, but the key word here is not updating, it is distributions. Namely, when performing Bayesian win price prediction, one does not predict a single number, or point estimate, from the input features, but a full probability distribution over the prediction.
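To make the idea concrete, here is a minimal sketch of such an update in the simplest conjugate setting: win prices assumed normally distributed with known observation noise, and a normal prior over their mean. All numbers and names are illustrative, not our production model, but the sketch shows the key point: the result is a distribution, not a point estimate.

```python
import math

def update_normal(mu0, sigma0, observations, sigma_obs):
    """Conjugate normal-normal update: prior Normal(mu0, sigma0^2) over the
    mean win price, observations assumed Normal with known noise sigma_obs.
    Returns the posterior (mean, std) -- a full distribution, not a number."""
    n = len(observations)
    prior_prec = 1.0 / sigma0 ** 2        # precision of the prior belief
    obs_prec = n / sigma_obs ** 2         # precision contributed by the data
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mu = post_var * (prior_prec * mu0
                          + obs_prec * (sum(observations) / n))
    return post_mu, math.sqrt(post_var)

# Prior belief: win price around 2.0; then we observe three auctions.
mu, sigma = update_normal(mu0=2.0, sigma0=1.0,
                          observations=[2.4, 2.6, 2.5], sigma_obs=0.5)
# The prediction is the whole distribution Normal(mu, sigma): it captures
# both the expected win price and how uncertain we still are about it.
```

The posterior mean lands between the prior and the data, and the posterior standard deviation shrinks as evidence accumulates, which is exactly the "updating distributions" behaviour described above.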
Ad auction bidding strategy for several competing performance metrics as a constrained optimisation problem
Here at travel audience we are in the business of adtech, i.e. programmatic online advertising. As a Demand Side Platform (DSP), our algorithms find the optimal audience to target in order to bring value to our clients, the advertisers. Targeting the identified audience with ads proceeds via participating in an online auction, which is triggered every time a user visits a website run by a publisher who wants to monetize the visits. All DSPs participating in the auction submit their bids, and the highest bid wins the right to show an ad from a client to the website visitor.
Now, this is of course an extreme simplification of the process every DSP executes in less than 100 milliseconds for each of the tens of thousands of bid requests it receives every second. Does the bid request for a given user fit the targeting criteria of the clients? Which ad from which campaign of which advertiser should one pick for this bid request? Which provides the most value to the client, and how do we predict this value? What is the right bid, given the expected value to the advertiser and our expected margin?
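The last of those questions can be sketched in a few lines. This is a hypothetical illustration, not our production bidding logic: it assumes the impression's value is predicted click probability times value per click, and that the bid leaves room for a target margin.

```python
def compute_bid(click_prob, value_per_click, target_margin):
    """Illustrative bid computation (all parameter names are hypothetical).
    The expected value to the advertiser is discounted by the margin we
    want to keep, giving the maximum price we are willing to pay."""
    expected_value = click_prob * value_per_click  # value of the impression
    bid = expected_value * (1.0 - target_margin)   # leave room for our margin
    return bid

bid = compute_bid(click_prob=0.002, value_per_click=1.50, target_margin=0.2)
# expected value = 0.003; bid = 0.003 * 0.8 = 0.0024
```

In practice the interesting work hides inside the inputs: predicting `click_prob` (or conversion probability) per request, and choosing the margin subject to the campaign's competing performance constraints, which is what the constrained optimisation in the title refers to.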
If you’ve been following our other blog posts at tech.travelaudience.com, you’ll know that we’re focused on running all our apps in Kubernetes. You’ll also notice that we’re big into Helm for packaging our k8s manifests. This post will get into the benefits of using Kubernetes for ephemeral environments and how Armador makes use of Helm to create them.
When we started running our apps in Kubernetes we used an “umbrella” chart, which listed each of the microservices as dependencies in one Helm chart. The “umbrella” chart worked because it allowed a single command to install all the services into an environment. But as more apps were released into k8s, each demanding its own release cycle, the umbrella chart no longer scaled. So we broke it apart, and each app was managed with its own CD pipeline.
Developers now had an easy way to deploy their app into staging/production, but what we didn’t have was somewhere to test the full system. A key aspect of a microservice architecture is making sure the individual services work in isolation, but it’s also important to make sure each service works within the full system. Providing developers with a way to run a multi-service environment on their own machine proved to be complicated.
At travel audience we run a microservice-based system on top of Kubernetes clusters. Since Kubernetes pods run docker images, changes to our services are built into docker images and pushed to an artifact repository. We have chosen Sonatype Nexus as our artifact repository manager (you can read more about this decision in a previous post).
travel audience’s Nexus is primarily filled with docker images. When changes are made to our git repositories, our extensive CI pipeline is triggered, part of which builds a docker image. That means that any commit in our git repositories is built and tagged with the change name (i.e. commit SHA, branch name, PR number or git tag). All these artifacts are then pushed to Nexus so that our k8s clusters can pull them when needed.
This setup of pushing to Nexus on every change was created to allow our developers to test their work. With this approach, they can test each service both independently and in integration with other services in our non-production environments.