These are the key insights I would like to share through my talk:
- Just as you know your algorithms, know the fuzzy sentiment that brews in your user's mind when they are served the wide-ranging outcomes your algorithm can generate. Hint: relentlessly listen to your users.
- In today’s atmosphere of big-tech distrust, if your AI product doesn’t explain itself, the narrative is lost and evil intentions are assumed.
- Opening up the black box and communicating the “why” of the algorithmic outcomes a user has to confront is the starting point for building trust. Toolkit: effective design language, operational transparency, and speaking to shared motivations and pre-established notions of fairness and trade-offs.
- Don’t be shy: shout out and show off when your algorithms generate exceptional user value. If you don’t sell it, no one will.
- Most importantly, look out for users when they get burnt by your algorithms: show you care and offer help. Build features that help users beat your algo.
- Holy grail: communicate to inform and shape user behaviour over the medium term. Make your users partners in achieving the goals of your product.
1. The importance of humanising AI-driven, consumer-facing products, and why this must be a prerequisite to scaling and optimising them. Brought to light by my experiences as a Pricing PM at Grab, when I failed to do so.
2. As more consumer interfaces in SEA are disrupted by first-time applications of AI, PMs and UX designers must understand the principles of behavioural science and the user needs that arise from engaging with often unpredictable, “black-box” AI products.
3. Examples of features I subsequently built at Grab, used to walk through the complementary approaches one can take to make AI products more readily accepted by users, ultimately enabling users to succeed both with and despite the AI algorithms.