This is a consume-at-your-own-pace event that takes place 100% online. All of the content has been pre-recorded and will be released at its scheduled time. If you can’t listen to or watch a session when it’s released, remember that you can access it at ANY TIME in the next 30 days.
Following the catastrophic California wildfires in November 2018, PG&E captured more than 2 million images of electric field equipment in high fire-risk areas, leveraging both drone technology and traditional inspection methods. We developed a technology solution, “Sherlock Suite,” that automates otherwise manual inspections using cutting-edge artificial intelligence (AI) techniques to reduce cost, increase quality, and give PG&E in-depth knowledge of the state of our equipment, with the goal of wildfire risk reduction.
Sherlock allows desktop inspectors to mark up potential equipment problems on high-resolution images, while Waldo leverages AI to train computer-vision models to automatically detect potential structural issues or damage to electric system components such as insulators. The project is also adding metadata to these data sets to make the images searchable across the enterprise and enable additional use cases.
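As an illustration of the detect-then-classify approach described above, here is a minimal sketch in Python. All names (`Detection`, `inspect_image`, the stub models) are hypothetical and stand in for trained computer-vision networks; this is not PG&E’s actual API.

```python
# Hypothetical sketch of a two-stage inspection pipeline: first detect
# components in an image, then classify each detection's condition.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    component: str               # e.g. "insulator"
    box: Tuple[int, int, int, int]  # (x, y, w, h) in pixels
    condition: str = "unknown"

def inspect_image(
    image_id: str,
    detect: Callable[[str], List[Detection]],
    classify: Callable[[str, Detection], str],
) -> List[Detection]:
    """Run component detection, then classify each component's condition."""
    detections = detect(image_id)
    for d in detections:
        d.condition = classify(image_id, d)
    return detections

# Stub models standing in for real trained networks.
def fake_detect(image_id):
    return [Detection("insulator", (120, 80, 40, 40)),
            Detection("crossarm", (60, 30, 200, 25))]

def fake_classify(image_id, det):
    return "damaged" if det.component == "insulator" else "ok"

results = inspect_image("tower_0001.jpg", fake_detect, fake_classify)
print([(d.component, d.condition) for d in results])
```

Splitting detection from condition classification mirrors the "detect the thing before we know it’s good or bad" ordering the speakers describe later in the thread.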
Feel free to ask questions in the discussion forum below - speakers will be responding as quickly as they can. View this webinar by the end of the day on June 9 and get entered to win a $100 Amazon Gift Card.
Great review gents. Are the labeled datasets available externally? Are you planning to make Waldo available externally?
Thanks Kevin! We do a mix of internal and external labelling, focusing on internal labelling for things that would require special training or knowledge to identify.
As for your second question, I’d be happy to chat more if you’re interested! It’s definitely something we are exploring.
Thank you for your reply, Kunal. I’d love to compare thoughts on Waldo’s use externally. What’s the best way to reach you? Feel free to send me an email to firstname.lastname@example.org
Perfect – just sent you an email 🙂
Also, do you have any specific examples of how you’re correlating the metadata to the image?
Yes, a specific example would be if we found that a structure contains a bird nest, we would store that prediction as metadata for the image. This would be used so that images with potential bird nests can be prioritized and routed to the appropriate channel (in this case our environmental group, so we make sure to keep in mind the seasonal nesting cycles when we do our work on those assets). In the future, we also plan to use this metadata for search purposes, so that you can search for “bird nests” rather than having to look up specific…
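The prediction-as-metadata idea above can be sketched in a few lines. All names here (`tag_image`, `route`, the tag labels) are illustrative assumptions, not PG&E’s actual schema:

```python
# Hypothetical sketch: store model predictions as image metadata,
# then route or search images by tag.
metadata = {}  # image_id -> set of predicted tags

def tag_image(image_id, predictions, threshold=0.8):
    """Keep predictions above a confidence threshold as metadata tags."""
    tags = {label for label, score in predictions if score >= threshold}
    metadata.setdefault(image_id, set()).update(tags)
    return tags

def route(image_id):
    """Send images with certain findings to the team that handles them."""
    if "bird_nest" in metadata.get(image_id, set()):
        return "environmental"  # e.g. check seasonal nesting cycles first
    return "standard_inspection"

def search(tag):
    """Find all images carrying a given metadata tag."""
    return sorted(i for i, tags in metadata.items() if tag in tags)

tag_image("img_001.jpg", [("bird_nest", 0.93), ("insulator", 0.99)])
tag_image("img_002.jpg", [("insulator", 0.97)])
print(route("img_001.jpg"))
print(search("bird_nest"))
```

The same tag store serves both use cases mentioned in the reply: prioritized routing now, and free-text-style search later.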
Got it, are you also using the metadata collected from the camera, such as location coordinates, UAV (if that’s what you’re using) data from the camera or gimbal, such as heading, altitude, speed, etc.?
We’re displaying the metadata in the inspection application, and it’s also being used for QA of the images. We’re not currently using any of the metadata as part of any of the models we’ve built yet, but there’s definitely an opportunity there.
Really interesting work here – Kunal and Michael – Are you automating inspectors? How will this affect inspector jobs, and how will you make sure we can trust the AI over human inspections for such a high-risk activity?
Hi Steve – great question! We are automating tasks, not people. As mentioned, an inspection consists of a number of smaller tasks – some lower risk, and some higher risk. We are focusing our efforts for full automation on lower-risk tasks to begin with. Another thing to mention is the analogy to self-driving car levels 0–5. Right now we are at level 0, and are soon moving to level 1 for a single low-risk task (identification of structure ID tags). With more data, more automation will become possible, but for the foreseeable future, inspectors will remain in the loop…
First post. 🙂 Was it necessary for your team to train object detection models for any objects? If so, approximately how many labeled images did you find were necessary, and from how many different positions?
Hi Norv – thanks for the question. We generally build object detection models for components prior to classifying wear (detect the thing before we know whether it’s good or bad). The number of labels and angles depends on the object in question, but generally speaking we use what we can get. In terms of angles, our “shot sheet” for drone image capture lays out a number of different angles for the structure, so we end up getting a number of angles for each component. As for the number of images, generally a few hundred per component serves well enough for a first version…
When curating a data set, we also try to select images from a wide variety of environments. We use the number of distinct transmission lines and structures in our data set as a metric to measure this. We have found that by including images from a wide variety of transmission lines and structures, we are able to have increased variation in our data set which helps the models we build generalize better to unseen images.
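The variety metric described above (distinct transmission lines and structures represented in a data set) can be illustrated with a short sketch. The field names and sample records are assumptions for the example, not the team’s actual data format:

```python
# Hypothetical illustration of a data-set variety metric: count distinct
# transmission lines and distinct structures represented in the samples.
samples = [
    {"image": "a.jpg", "line": "L-101", "structure": "S-17"},
    {"image": "b.jpg", "line": "L-101", "structure": "S-22"},
    {"image": "c.jpg", "line": "L-205", "structure": "S-03"},
]

def variety(samples):
    """Summarize how many distinct lines and structures a data set covers."""
    lines = {s["line"] for s in samples}
    structures = {(s["line"], s["structure"]) for s in samples}
    return {"distinct_lines": len(lines),
            "distinct_structures": len(structures)}

print(variety(samples))
```

Tracking these counts while curating gives a quick check that new labeled images are actually adding environmental variation rather than more shots of the same structures.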
Thanks to you both! Our experience is similar which encourages me that we are on the right track. Great presentation today!
Where in the inspection process can computer vision models be used?
Great question David – our plans so far are to include computer vision across image QA, inspection prioritization, inspections themselves (including identification of compliance items, failures, and classification of images such as overview images or images showing the right of way or access paths), inventory, search, and also as inputs into other predictive ML models. There’s probably a lot more that we could do that we haven’t thought of yet – would love to hear your thoughts as well!
Great presentation, folks. Thank you for sharing!
Thanks so much Chris 🙂
Excellent presentation. How many people are required by this project? I mean, how many data scientists/analysts have worked on this project, and how many total man-hours has it taken?
Hey Chao – we now have four teams working on this (Sherlock Search+Inspect, Sherlock QA+Supervisor, Nostra, and Waldo). Each team has between 4 and 7 members. Only Waldo and Nostra have data scientists. We’ve been working on this for about a year. Having said that, it has expanded over the course of the past year, and it started with a much smaller group and only one team in January of last year.
Hi. Very interesting presentation and great work. Thanks for sharing. Will you share the slides shown?
Thank you. I am glad you enjoyed the presentation. Unfortunately, we will not be sharing the slides.
Copyright © 2020 Endeavor Business Media