Despite the enormous potential of AI, its impact on security operations has often fallen short, with teams grappling with privacy concerns, misaligned priorities, and overhyped vendor promises. Yet, security teams at Samsara and Ekco are bucking this trend, leveraging AI to deliver tangible benefits such as time savings, enhanced decision-making, faster incident response, and stronger data governance.

In a recent webinar on optimizing security operations with AI, security engineers Hela Lucas of Samsara and Kieran Walsh of Ekco shared their most impactful AI use cases and offered actionable recommendations for engineers looking to follow suit. From streamlining spam detection to automating ticketing and reporting, their experiences show how AI is already helping security teams achieve more with less.

Real-world applications: how AI drives efficiency at Samsara and Ekco

At Ekco, a security-first managed cloud services provider, SOC Engineer Kieran Walsh describes how AI has become an integral part of daily operations, assisting with tasks like writing Regex, debugging queries, and automating data summaries.

This approach eliminates tedious manual work, enabling analysts to focus on higher-value activities. Kieran highlights how AI’s ability to integrate new tools into workflows within minutes significantly reduces onboarding time and accelerates productivity.

“Using AI for security analysis is a huge advantage,” he says. “It can really help by providing additional context and explanations in a more human-friendly way.”

Industrial IoT technology leader Samsara, while still in the early stages of AI adoption, has seen its own wins. Security Operations Engineer Hela Lucas shares how AI-generated vulnerability summaries help simplify technical details for system owners, providing clear instructions for remediation. This process not only speeds up response times but also ensures that all stakeholders understand the issues at hand.

“We have software that identifies vulnerabilities, and we use AI to translate this technical information into a more readable language,” she says. “This means when a system owner is notified about a vulnerability, they get a clear explanation of the issue and instructions on how to fix it.”

Let’s take a closer look at how these teams are leveraging AI to optimize key processes.

Eliminating tedious manual work for analysts

Kieran describes himself as a “daily AI user” who outsources simple engineering tasks to AI.

“I use it to answer questions like, ‘What have I not considered? What steps should I take? Does this look malicious?’ Also, for anything that’s high development and low output, we use AI to essentially speed up the development process.”

Kieran also says that any tedium associated with building and integrating new workflows is “completely removed” thanks to AI. “For example, we can connect a toolset into our workflow automation platform, Tines, ask AI to summarize the data in a presentable format, and then throw it into the case the analyst will work from. If someone gives us a new toolset, the time to onboard and integrate it into the analyst’s tech stack is five minutes. It’s already made things a lot quicker.”
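To make the pattern concrete, here is a minimal sketch of that summarize-and-attach step in Python, assuming an OpenAI-compatible endpoint via the openai SDK. The function names, model choice, and sample event are illustrative; in Ekco’s case this logic lives inside a Tines workflow rather than standalone code.

```python
"""Illustrative sketch: take raw JSON from a newly connected tool, ask an
LLM to summarize it in a presentable format, and hand the result to the
analyst's case. Names and the sample event are hypothetical."""
import json
from openai import OpenAI  # assumes the openai SDK and an API key in the environment

client = OpenAI()

def summarize_tool_output(tool_name: str, raw_event: dict) -> str:
    """Ask the model for a short, analyst-friendly summary of raw tool data."""
    prompt = (
        f"You are assisting a SOC analyst. Summarize this output from "
        f"{tool_name} in a few bullet points, flagging anything that looks "
        f"suspicious:\n\n{json.dumps(raw_event, indent=2)}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example: raw output from a freshly integrated tool, ready to drop into a case.
event = {"alert": "impossible_travel", "user": "jdoe", "src_ip": "203.0.113.7"}
print(summarize_tool_output("NewEDRTool", event))
```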

Detecting and preventing spam

AI-powered spam filters have “drastically improved the quality of life” for Samsara’s on-call team, Hela explains. “Getting paged for spam is a huge pain point because it’s a false escalation.”

When asked how the team manages human intervention as part of this process, Hela says, “It’s important to ensure there’s some kind of action or button that a human can use to confirm whether something is the right action. While the AI is there to detect spam, we still have an expectation that a human will review whether it’s correctly categorized or if the AI was mistaken.”
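As a rough illustration of that confirm-before-close pattern, the sketch below keeps the AI verdict advisory: a page is only suppressed once a human presses the equivalent of Hela’s “button.” The classifier, threshold, and callback are stand-ins, not Samsara’s implementation.

```python
"""Human-in-the-loop spam handling, sketched: the AI proposes a verdict,
an analyst confirms it, and anything uncertain still pages on-call."""
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    id: str
    subject: str
    body: str

def ai_spam_score(alert: Alert) -> float:
    """Placeholder for the AI spam classifier; returns a 0.0-1.0 score."""
    return 0.97  # pretend the model is confident this alert is spam

def handle_alert(alert: Alert, confirm_spam: Callable[[Alert, float], bool]) -> str:
    score = ai_spam_score(alert)
    if score >= 0.9 and confirm_spam(alert, score):
        return "closed_as_spam"   # AI flagged it and a human agreed
    return "escalated"            # low confidence or AI mistaken: page on-call

# The "button" here is just a callback recording the analyst's decision.
alert = Alert(id="A-123", subject="You won a prize!!!", body="click here")
print(handle_alert(alert, confirm_spam=lambda a, s: True))
```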

Saving hours on ticket creation and delivery

AI workflows have greatly improved ticketing times and processes for Ekco. “When a case needs to go to a client, you’ve got all this information you need to translate into customer-focused language and then submit it to another platform,” Kieran says.

“Using AI, you can create a summary, which the analyst can edit before sending the ticket. This saves us so much time. I think it works out to be around 15 minutes per case.”
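A minimal sketch of that edit-before-send step might look like the following. The drafting helper stands in for the LLM call, and the ticket payload is printed rather than posted; the field names are hypothetical, not Ekco’s actual setup.

```python
"""Sketch: AI drafts a customer-facing summary, the analyst edits it,
and only then is the ticket handed to the client-facing platform."""

def draft_customer_summary(case: dict) -> str:
    """Stand-in for the LLM call that rewrites internal case notes
    into customer-friendly language (see the earlier sketch)."""
    return (f"We investigated suspicious sign-in activity on {case['asset']} "
            f"and have contained the affected account.")

def prepare_ticket(case: dict) -> dict:
    draft = draft_customer_summary(case)
    print("--- AI draft ---\n" + draft)
    edited = input("Edit the summary (or press Enter to keep it): ") or draft
    # A real workflow would POST this payload to the ticketing platform.
    return {"case_id": case["id"], "summary": edited}

print(prepare_ticket({"id": "C-42", "asset": "mail-gw-01"}))
```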

Guiding analysts with playbooks for threat detection

Ekco’s security team also uses AI to analyze attack indicators, summarize events, determine whether they’re malicious, and get advice on how to proceed. “It’s not the be-all-end-all,” Kieran says, “but it can do a bit of guiding.”

Samsara uses AI to generate playbooks for threat detection, too. “A human still needs to review and ensure accuracy by providing context around how our company functions,” Hela adds, “but the AI offers valuable ideas for actionable steps and specific detections.”

Organizing security inboxes in the background

One of Samsara’s top administrative use cases is the AI-powered categorization system for the team’s managed security inbox.

At the moment, ensuring emails are correctly categorized still requires some input from a human – Hela’s goal is to refine it so that it “autonomously triggers appropriate workflows with minimal human interaction.”
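One way to picture that progression is a confidence gate: high-confidence labels trigger a workflow automatically, and everything else is held for a person. The categories, threshold, and placeholder classifier below are illustrative, not Samsara’s configuration.

```python
"""Inbox triage sketch: an AI label routes each message to a workflow,
with low-confidence results held for human review."""

WORKFLOWS = {
    "phishing_report": "trigger_phishing_playbook",
    "access_request":  "trigger_access_review",
    "vendor_question": "assign_to_grc_queue",
}
AUTO_ROUTE_THRESHOLD = 0.85  # below this, a human re-checks the label

def categorize(message: str) -> tuple[str, float]:
    """Placeholder for the AI categorization step."""
    return "phishing_report", 0.92

def triage(message: str) -> str:
    category, confidence = categorize(message)
    if confidence >= AUTO_ROUTE_THRESHOLD and category in WORKFLOWS:
        return WORKFLOWS[category]   # confident: trigger the workflow directly
    return "hold_for_human_review"   # otherwise keep a person in the loop

print(triage("User forwarded a suspicious invoice email..."))
```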

Solving classic challenges with SOC reporting

One of Kieran’s top priorities for 2025 is using AI to create data-driven reports and eliminate manual work for Ekco’s security analysts. “We want to produce concise, data-driven reports that allow analysts to avoid writing anything manually. We want to get them to the point where they’re editing the text AI has supplied, and creating useful graphics that customers can easily understand. No one wants to see a wall of text when they’re paying for a report.”

“The beauty of an analyst’s role now is that we can allow them to actually focus on trend analysis and add meaningful context to that report. The process of using AI is going to improve the efficiency of our report generation and also mean that the quality improves over time, and that’s really priority number one.”

Overcoming limitations and AI’s “black box” problem

When asked for his thoughts on AI’s limitations, Kieran says, “While AI is a great enabler, it’s really important to recognize that it’s not infallible.”

Hela agrees, adding, “When things go wrong, AI can be a bit of a black box. Determining whether something is a threat is sometimes a binary decision. AI can assist with that, but if it makes a mistake, it can be very difficult to understand why. With a human, you can just ask and then address it with some kind of training or knowledge.”

AI hallucinations are another concern – falsified IP lookup results and EDR data are two examples Hela shares. Kieran also relays his personal experience with an AI-hallucinated answer to a simple question about how to toggle an advanced setting in Azure.

When asked how companies can overcome known limitations like these, Hela says it’s important to “educate users on how it works and how to interact with AI-generated data.”

Kieran agrees, adding, “Making sure you maintain human oversight in basically all scenarios is another way to mitigate risk.”

Kieran also encourages analysts to understand how AI tools are using their data. “One of the key questions you should ask yourself is, where is your data going? And how do you opt out of it being used?”

Preventing “shadow AI”

Hela highlights her concerns about shadow AI – employees sharing confidential company data in chatbots without security’s awareness.

“You don’t have any logs from it, you don’t know what’s being shared or how the AI is using that data,” she says.

While Hela doesn’t disclose Samsara’s specific shadow AI policy, she was happy to share some general recommendations. “Make sure there’s an actual policy around the use of AI,” she says. “If it were up to me to write a policy, I would probably say that you can only use approved AI tools, because that means you have governance and control over the data.”

Blocking unapproved AI tools with a proxy and regularly reviewing EDR and network logs are also good ways to protect sensitive company data.
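As a small illustration of the log-review side of that advice, the sketch below flags outbound requests to known AI services that aren’t on an approved list. The domains, log format, and approved gateway are made up; a real deployment would pull this data from the proxy or EDR platform directly.

```python
"""Shadow AI log review, sketched: flag proxy log lines that hit an AI
service outside the approved set."""

APPROVED_AI_DOMAINS = {"ai-gateway.example-corp.com"}  # hypothetical approved tool
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(proxy_log_lines: list[str]) -> list[str]:
    """Return log lines pointing at unapproved AI services."""
    hits = []
    for line in proxy_log_lines:
        domain = line.split()[2]  # assumes: timestamp user domain method port
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits.append(line)
    return hits

sample = [
    "2025-01-10T09:14Z jdoe chat.openai.com CONNECT 443",
    "2025-01-10T09:15Z asmith wiki.example-corp.com GET 443",
]
for hit in find_shadow_ai(sample):
    print("Unapproved AI traffic:", hit)
```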

AI prompt tips for security teams

Hela and Kieran both shared prompting tips for other security teams looking to leverage AI.

“For your first prompt, ask the AI to write a good prompt for what you’re trying to achieve,” Hela suggests. “I find that the AI-generated prompt is usually more effective than my attempts at writing prompts.”

Kieran advises being as clear and descriptive as possible. He also likes to write prompts in the third person – for example, instead of prompting the AI to generate something for “me,” asking it to do something for a specific person, like a junior security analyst. “This subconsciously helps me write a better prompt,” he explains.
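For illustration only, here is roughly what those two tips could look like in practice; the wording is hypothetical, not a template either team shared.

```python
"""Two tiny examples of the prompting tips above, as plain strings."""

# Hela's tip: first ask the model to write the prompt for you.
meta_prompt = (
    "Write a prompt I can reuse to get concise, accurate summaries of "
    "EDR alerts for a SOC case management tool."
)

# Kieran's tip: phrase the request in the third person, for a specific role.
third_person_prompt = (
    "Explain this PowerShell command to a junior security analyst, "
    "noting whether any part of it looks malicious."
)
print(meta_prompt, third_person_prompt, sep="\n\n")
```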

While these wins showcase the immediate benefits of AI, Hela is quick to add that they’re not relying on AI for critical decisions. “As we build more trust, refine the prompts, and identify edge cases, we expect to integrate AI more in the future.”

Optimizing security operations with AI

Catch up on the full conversation and learn how your team can optimize security operations with AI by watching the webinar recording.

About the Author

Tines is the smart, secure workflow builder for your whole team. In 2018, Eoin Hinchy and Thomas Kinsella founded Tines to solve very real pains they experienced first hand: their teams were overworked, overstretched, and overwhelmed. The influx of alerts and systems was slowing them down and making them vulnerable. This motivated them to develop Tines, a smart, secure workflow platform to securely build, run, and monitor the world’s most important workflows. We believe that by combining automation and human ingenuity, organizations will be more efficient, more secure, and have more engaged, happier teams. You can find us at our company website https://www.tines.com/

Source: www.cyberdefensemagazine.com
