Verdigris Machine Learning Feature
Interaction Design for customers to give feedback in the Verdigris Energy Tracker app on the accuracy of our algorithms
(Def. “automated anomaly detection”: the product sends the user an alert when energy activity falls outside normal behavior patterns.)
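The kind of alert logic described above can be sketched as a simple statistical threshold test. A z-score check is one common, minimal approach; it is only an illustration here, since the case study does not describe the actual Verdigris algorithm.

```python
from statistics import mean, stdev

def is_anomaly(reading, history, z_threshold=3.0):
    """Flag a reading that falls outside normal behavior patterns.

    A reading is anomalous if it lies more than `z_threshold`
    standard deviations from the historical mean. This z-score
    test is an illustrative stand-in, not the production algorithm.
    """
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_threshold

# Typical HVAC load hovering near 100 kW; a conference-day spike stands out.
history = [98, 101, 100, 99, 102, 100, 97, 103]
print(is_anomaly(160, history))  # True — far outside the normal range
print(is_anomaly(101, history))  # False — within normal variation
```

Note the limitation this sketch shares with any purely statistical detector: the conference-day spike really is anomalous by the numbers, even though the user knows it is expected.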
For example, if there were suddenly a rush of people into the building for a conference, that would drive unusually high energy demand across the HVAC system. Our algorithm would flag it as an anomaly, but our user wouldn’t see it that way.
The ML team wanted to know: was there any way for them to get “the ground truth”?
Co-created Flow Chart: I made the first draft and then the Web Dev and Machine Learning teams added, removed, and rearranged components to fit their understanding of how to build the feature.
We had heard this before in interviews with building operators.
They know by the sound a motor makes when something is off, and by the feel of a door swing when the ventilation and building pressure aren’t right. One building engineer told me he looked at a report of the previous day’s building energy every morning for anomalies. When I asked how he knew he’d found an anomaly, he answered, “I just know. I’ve developed a gut instinct for it.”
Perhaps THERE WAS NO GROUND TRUTH!
Whether the engineer’s gut instinct was right or wrong wasn’t as important as allowing him to TELL US that gut instinct, and to have our product align with that.
Final email design for the notification with prompt
Versions of the feedback mechanism
Pixate prototype used for gathering feedback and communicating interaction to devs
1. It had to be obvious that this alert was created by our algorithm, different from alerts that the user specifically created in the setpoint tracker.
2. We wanted users to know that they could give us feedback on the notification, whether it was to confirm the algorithm or say it was wrong.
3. At the same time, the feedback prompt should not distract from the main content of the alert message.
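The three requirements above can be sketched as a notification payload that carries an explicit source field and an optional feedback slot. All names here are hypothetical, for illustration only; the actual Verdigris schema is not described in this case study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnomalyAlert:
    """Hypothetical alert payload reflecting the three requirements.

    - `source` distinguishes algorithm-generated alerts from alerts
      the user created in the setpoint tracker (requirement 1).
    - `feedback` records the user's confirm/deny response
      (requirement 2); it defaults to None so the prompt stays
      secondary to the alert message itself (requirement 3).
    """
    message: str
    source: str = "algorithm"        # vs. "setpoint" for user-created alerts
    feedback: Optional[bool] = None  # True = confirmed, False = marked wrong

    def record_feedback(self, confirmed: bool) -> None:
        self.feedback = confirmed

alert = AnomalyAlert("Unusual HVAC demand detected at 9:00 AM")
alert.record_feedback(False)  # user says the spike was expected
```

Keeping `feedback` optional mirrors the design intent: the user can ignore the prompt entirely, and only an explicit confirm/deny reaches the ML team as a ground-truth label.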