LIVE FROM GARTNER

Live blog from Gartner IAM

In partnership with

Add file uploads instantly with Pinata’s developer-friendly File API

As a developer, your time is valuable. That’s why Pinata’s File API is built to simplify file management, letting you add uploads and retrieval quickly and effortlessly. Forget the headache of complex setups—our API integrates in minutes, so you can spend more time coding and less time on configurations. With secure, scalable storage and easy-to-use endpoints, Pinata takes the stress out of file handling, giving you a streamlined experience to focus on what really matters: building your app.

Welcome! Wanted to try something different. For the next couple of days I’ll be live blogging as I attend different sessions at Gartner IAM. Sessions to session I’ll update this blog with my thoughts. Key points made, and probably some snarky comments.

Opening Keynote

Machine identity is a thing , like a REAL THING. I’ll find the stat and update later, but the take-home Is that machine identity outnumbers humans significantly.

Centralized decentralization. IAM has grown to be managed not just by one unit but by a collection of centrally orchestrated business units.

Standards

Here’s the game: guess which one of these things is not a real standard. See below for the answer.

IAM Resilience

Wow pretty cool seeing this as apart of the keynote. A topic we don’t talk about enough! Shout out to the crew over at Acsense. They’ve been beating this drum for a while now.

The quote of the Keynote so far “Must build a culture of resilience” - Bars.

While I agree with the statement, culture building is tough!

Characteristics of Resilient Organization

  • Preoccupation with Failure

  • Prioritize Resilience

  • Measure ( must be meaningful metrics!!)

    Didn't get the rest of the characteristics as I got pulled into another meeting, I’ll try and see if I can find out what they were and update this.

DAY 2

In typical conference fashion didn’t get a chance to attend as many sessions as I would have liked. ( boooo work) But did attend a VERY good session on AI Risk today on Day 2. I think I might actually write up a blog on this later…but for now here’s the breakdown

Technical Insights: Top Generative AI Adoption Security Risks and Mitigations - Dennis XU, VP Analyst at Gartner

Worth saying again that I loved this session! Mostly because it dealt with something that keeps me up at night when I think about the evolving risks tied to AI models. As we push the boundaries of what AI can do, we’re also unlocking new challenges ( well old challenges really, but I’ll get to that) particularly in security.

Here’s the gist:

1) Data Loss and Sneaky Prompts

A security researcher showed how uploading files to a Custom GPT can lead to significant data leaks. ( I believe Dennis said this happened at Blackhat…I’m not surprised). By using German prompt,s they tricked the system into generating download links for private files. ( YIKES).

2) Malicious Instructions:

So, apparently, prompt injection attacks became 40% more effective with GPT-4. Bad guys are learning to manipulate AI’s outputs using algorithms like GCG to increase the likelihood that the AI complies with their instructions. ( YIKES)

3) The Remote Co-Pilot Risk

Imagine embedding malicious instructions into external files or links from which an AI System pulls. (Does this sound familiar to anyone??) It’s SQL Injection all over again.

4) Prompt Engineering is what the cool kids do.

One of the big takeaways? If you want to avoid AI diasters, learn the art of asking better questions. (I swear it’s like someone should do a masterclass about this.) Prompt engineering is critical - not just for better outputs but for safer interactions

5) User Training and Guardrails: A must!

No amount of technology can replace informed users. ( Yet…) Training people on the limitations of the AI and equipping them with the tools to validate outputs in non-negotiable. ( And having said that, we still won’t do it….)

Some Recommendations

  • Lock down access polices and implement guard rails to keep data secure

  • Train your team in prompt engineering so they ask smart, specific questions.

  • Use retrieval validation to ensure the data your AI accesses is accurate and relevant.

  • Be skeptical of AI systems that promise “agentic workflows” and give them clear boundaries.

Big Picture

AI is evolving fast, but so are the risks. It’s no longer just about building cool models, it about building secure and responsible AI. ( Queue Dr. Malcom)

The future of AI isn’t about intelligence, it’s about trust. And trust is something you earn, not something you build.

Day 3 - Shared Signals BABY

Ok so look it was a crazy week…check the blog section on Monday as I’ll be posting my full recap on the week.

But I wanted to at least write about the Shared Signals event I had the chance to go to at Gartner. It was a really cool setup in which multiple vendors showed their interactivity through the use of the Shared Signals Framework ( SSF).

For a quick primer on SSF go here:

So here’s the coolest thing I saw at Gartner all week. It was an “integration” between AppOmni and Okta ( at least I think it was Okta….it’s been a long week). But here’s the gist: AppOmni converted events from Salesforce into Shared Signals events. A current user has Salesforce access and is looking to use a third party utility to export data from Salesforce. When that user is removed from Salesforce, AppOmnni sends a signal to Okta killing all access and sessions.

Viola…Zero Trust BABY!!!

Ok, not really, but kind of. The point of zero trust was to have interactive systems that can constantly authentication and authorize based on user context. SSF lights the way of that path.

Check out AppOnmni and Okta for more details,

Reply

or to participate.