
Episode 94: Copilot for Security

Apr 01, 2024 · 36 min · Season 1, Ep. 94

Episode description

In this episode, Michael, Sarah, and Mark talk with guest Ryan Munsch about the newly released Copilot for Security. We also discuss Azure Security news about Azure SQL DB, SSMS 20, Change Actor, Copilot for Azure SQL DB, Azure Container Apps, AI Prompt Shields, AI Groundedness Detection, and BlueHat India and Israel.


Transcript

Welcome to the Azure Security Podcast, where we discuss topics relating to security, privacy, reliability and compliance on the Microsoft Cloud Platform. Hey, everybody. Welcome to Episode 94. This week, it's myself, Michael, with Sarah and Mark. This week, our guest is Ryan Munsch, who's here to talk to us about Copilot for Security. But before we get to our guest, let's take a little lap around the news. Sarah, why don't you kick things off?

So, a couple of things that I'd love to start with, a bit on containers. In public preview, we've now got support for Key Vault certificates in Azure Container Apps. So that's nice. You can bring your own TLS or SSL certificates in Container Apps, and you can use Key Vault to store them. And for GA, we've now got free managed certificates on Azure Container Apps. So again, a nice free certificate. We love free things, and of course, certificates are very important. Another couple of things.

Now, this is, of course, what everyone's talking about at the moment: a couple of things on the responsible AI side. Some people might be like, oh, responsible AI, that's not security. But in fact, if you start looking at that stuff, and I'll put it in the show notes as well, I recently released a very, very short blog post on this, responsible AI and security basically can't be separated. And so we've also released a couple of things on the responsible AI security side.

One is Prompt Shields. That will help check your large language model inputs for user prompt attacks and document attacks. It's built into Azure AI Studio, so you just have to turn it on. And we've also done one for groundedness detection. So, groundedness, with an LLM, a large language model.

I'm going to explain this terribly, but essentially, grounding is when the response that the model gives you is actually grounded in the material that it's been trained on and the material that's been provided to it. And what it essentially means is that it gives you better answers. These are protections that can be got around if people give clever prompts, so having more tools to stop that is important. So go and have a look at that if you are using AI stuff.

And then the last couple of things, because I like to give a shout out for events. I know we talked about Build, which, at the time we're recording this, is coming up in a couple of months. But also, for those of you who are security research type people, we have our BlueHat conference. And there are actually two coming up. There's BlueHat India, which is in the middle of May.

The call for papers for that is closed, but you can apply to attend BlueHat India, which is in Hyderabad. There's also BlueHat Israel, which is a couple of days, I think the week after BlueHat India, and that still has its call for papers open, at least at the time we're recording this, for I think another week or so. It's the first time they've done BlueHat in India, but BlueHat Israel has been around a long time. I have sadly not been able to go to it yet, but I'm told it's really good.

So if you're into your security research, you should go and check that out. And because India and Israel are not that far apart, you can basically go from one to the other. And I'm very sad, not that I've planned this out in my head, but unfortunately I will be at Build this year, so I won't get to go. But if you are interested, you should go check it out, and I'll put the links in the show notes. And Michael, that's my news, over to you. I can't believe you said SSL.

You know, SSL has been deprecated for decades, right? Full disclosure, I was reading, and everyone can go and look it up in the show notes, I was reading the Azure news, and I did read TLS slash SSL. I 100% agree with you, Michael. Yeah, we should get rid of that. We should. Something on my bucket list is to get rid of references to SSL. Anyway, on to the news. We've just added a new feature to Azure SQL Database: advanced notifications for planned maintenance.

It basically gives you more flexibility on when maintenance may occur on your Azure SQL databases. This is something you can sign up for in the portal. It's well worth looking at; it just gives you more control over when maintenance occurs on your instances. Next up, we have a new thing called Database Watcher for Azure SQL. This is like a big dashboard for all your Azure SQL databases and managed instances running inside Azure.

Essentially, it's just a way of monitoring everything without having to deploy any agents whatsoever. The data is already there; we're just presenting it in a more concise dashboard so you can see everything that's going on with all your database instances. Next, SQL Server Management Studio version 20 came out just recently, and there's a big change that we made in the UI around using TLS. Basically, the tool is now quite strict when it comes to using TLS, and it requires TLS by default.

It's also the first version of SSMS to support TLS 1.3, because it has switched over from System.Data.SqlClient to Microsoft.Data.SqlClient, which supports TLS 1.3. But I can almost guarantee there will be some people who have, let's just say, a couple of headaches in the user interface when it comes to connecting to their SQL instances. So a colleague of mine, Aaron, has written a blog post on how to navigate any errors that you may come across.
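
To ground that a little, here is a hedged sketch, not from the episode: in Microsoft.Data.SqlClient 5.x the Encrypt connection keyword accepts Optional, Mandatory, and Strict, and, as I understand it, SSMS 20 surfaces the same choices in its connection dialog with Mandatory as the new default. Strict mode uses TDS 8.0, which is what enables TLS 1.3. The server name below is a placeholder:

```text
Server=tcp:myserver.example.com,1433;Database=master;Encrypt=Strict;HostNameInCertificate=myserver.example.com
```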

They're all in the name of security, and they're actually really good changes, but people just have to get used to them. Next one: in Azure, we have the public preview of Change Actor. The way I look at this is it's a way of bubbling up a lot of information in the activity feed and adding a little bit more intelligence to the results, to make it easier to find out who changed what, when, and why, and where from.
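
For a sense of what that enables, here is a hedged Azure Resource Graph sketch, not from the episode: Change Actor populates a changedBy attribute on entries in the resourcechanges table, so a query along these lines should surface who changed what; verify the exact field paths against the current schema before relying on it.

```kql
// Hedged sketch: recent changes and the actor who made them.
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp),
         changedBy  = tostring(properties.changeAttributes.changedBy),
         clientType = tostring(properties.changeAttributes.clientType),
         changeType = tostring(properties.changeType),
         targetId   = tostring(properties.targetResourceId)
| where changeTime > ago(7d)
| project changeTime, changedBy, clientType, changeType, targetId
| order by changeTime desc
```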

It just makes it a lot easier to do that kind of data spelunking, as opposed to doing it all by yourself. So I'm a big fan of anything that helps people get to the bottom of why things changed. And the last thing, which I'm just going to touch on really, really fast, only because it literally was released a few hours ago: we have announced Copilot for Azure SQL Database, in private preview right now. I've played around with it for a little bit and worked on some of the security for it.

Fantastic product, but the only reason I brought it up is because we're talking about Copilot today, and I think it's a pertinent topic to discuss. So with the news out of the way, let's turn our attention to our guest. As I mentioned before, our guest this week is Ryan Munsch, who's here to talk to us about Copilot for Security. Ryan, welcome to the podcast. Would you like to take a moment and introduce yourself to our listeners? Well, first off, thanks for having me.

Excited to be here. I am a principal technical specialist at Microsoft, and I've been focused on Copilot for Security for, well, a little over a year, as one of the first people to get access. I've been tinkering and toiling with it, but also helping to push it along and advocate for the solution along the way. My background stems from prior cybersecurity work in threat intelligence and really internet telemetry and reconnaissance.

Prior to that, I had more of a traditional background in DevOps and Exchange email management, more of the IT side of the house. But where I think that's been really great for me is that it's been foundational to the goal of what Copilot for Security is, and that is to bring together a myriad of different backgrounds and areas of expertise to complement what you may or may not be good at. So Ryan, let's start at the beginning.

Now, when we release this, of course, Copilot for Security will have just gone GA, or generally available, so everyone can go and have a go with it. But what is Copilot for Security? Why have we done it? Because I know some people will think we've done a lot of Copilots recently, and I think some folks get confused. Yeah, the terrible joke I like to make here in the United States is that we're basically the Baskin-Robbins of AI here at Microsoft: we have 31 different flavours of Copilot.

But where we are fundamentally different with Copilot for Security is in the approach we took in the inception of the design. If you look at 99%, really probably even more than that, it's an infinitesimally small decimal at this point, of generative AI systems in the world, they are designed and predicated upon doing one thing: finding content and generating new content. And there are tons of applications for that. It can do a lot of incredible things.

But what sets Copilot for Security apart is that when the team went out to build this, they took two giant steps back, looked across the security ecosystem, and asked themselves: what should we solve, and what could we do with AI in security? And I think you all are aware of this. You're incredibly brilliant people, and listening to you all just talk about the news is inspiring as well as intimidating. Security is a really difficult thing, no matter where you operate.

You operate in silos, with all these different specializations that are hard to replicate and bring to anyone else. And so what they recognized is that, due to all this fragmentation, AI has this possibility of spanning across all of that, collapsing the fragmentation but also upleveling the abilities of anyone that uses AI in security, so they can collectively operate with security in any context. It doesn't matter if you come from a database background or maybe even decided to join security from HR.

If you need security context, you can ask a simple question and get back a profound result. That has been built into the core architecture, and it's why it's not only so unique and different, but also why, in a lot of ways, we're leading and set apart from what you'll see inside of other AI solutions and across the market in general. So, in the news I mentioned that we've just released a private preview of Copilot for Azure SQL DB. So how is this different to other Copilots?

As you mentioned with the Baskin-Robbins quote, we have Copilots for absolutely everything. So how is this different from the other Copilots? Ultimately, what is the job of Copilot for Security? Yeah, that's one of my favorite questions to answer and speak to. And candidly, it was really hard to articulate for a long time. Then, about a month ago, we finally got some help: there's this great paper written by the Berkeley Artificial Intelligence Research lab, or BAIR.

It's like a digital bear if you go to the website. And the paper is "The Shift from Models to Compound AI Systems." If you look at what Copilot for Security does, how it's built, and how we have to go out and solve the problem of fragmentation and complexity, you can't do that with a monolithic model.

In fact, and this is what I'll say is the headlining quote from the research paper: state-of-the-art AI results are increasingly obtained by compound systems with multiple components, and not just monolithic models.

So in security, if you were to go out and train an LLM, let's say you do exactly what OpenAI does: use Microsoft's supercomputer, which is the fifth largest in the world, train it against trillions of parameters, and have it spin away for months, if not longer, to arrive at a new model. Well, the moment you get a new vulnerability, or a new system that you need to incorporate into a security context, that really expensive, laborious model training run is immediately out of date.

So it doesn't work well in security, just like it wouldn't work if you were to apply a model to healthcare or any other similarly specialized and highly fragmented environment.

So what we've done, and this is really what separates it from other AI systems, well, other Copilot systems across Microsoft, is that we work with this compound AI system predicated upon orchestration and a plugin architecture, as well as a number of different grounding mechanisms, to ensure that we anchor in truth. So for example, inside of security, you typically work with two or three sources of threat intelligence.

We provide MDTI, Microsoft Defender Threat Intelligence, for free as a grounding mechanism to help ensure that we anchor on a source of truth, but also because we believe fundamentally that threat intelligence should be baked into everything. As an organization, you will typically have two or three other sources, so that you have some level of due diligence to confirm your intelligence and confirm anything that you're assessing.

So in connecting to those other systems, you wouldn't train a model against them, but you can use a plugin that connects into that API, understands where to get an indicator, provides a threat intelligence summary back, and collapses it all by using the model itself. So it's a combination of the best of both worlds, in that we can use the powers of generative AI but still supercharge it with the extensible architecture that comes inherent to building anything in Azure.

So I've got a question. Say someone's got Microsoft Defender and a couple of different technologies, or maybe all of them: Microsoft Sentinel, Purview, Intune, etc. What does, excuse me, Copilot for Security, what does that do for me? What does it add, and what does it change, that isn't already available in the existing products and technologies? Great question. I'm going to break this down in a few different ways.

The first thing I like to think about is, back to the core design, what we recognized pretty early on: Copilot has to operate in the capacity and concept of workflows. Workflows can begin anywhere and exist in any state. No matter the case, we have to be able to interact with them and augment them.

So when we started off with some of our very first customers, I'm talking about the first ten into the platform, one of the things they quickly realized is that, well, when they conduct incident response, that actually starts in something like ServiceNow. And so they went back to us and said, hey, we are Defender customers and Sentinel customers through and through, but we still send this over to ServiceNow to track our incidents.

If we don't have a ServiceNow plugin, this doesn't do us a whole lot of good, or doesn't provide us a lot of advantage over our existing ecosystem and how we tie everything together. And so one of the things we've recognized is that even if you spend all of your day inside of Defender or Purview or whatever the solution is, you can take whatever you've done inside of those systems and cross-reference it with something outside of the Microsoft security ecosystem.

And it doesn't stop there, or even outside of the security ecosystem itself: maybe you go over to a different IT system. One of my favorite things to bring up is a great example.

I used to work with a bunch of ex-CISA people here at Microsoft, and they would talk about how one of the first things they would do post-breach is analyze who was attacked and then compare that against an HR system, discerning similarities between the victims so they could understand the motive for an attack. And that is something you can't really discern in Defender or Purview or otherwise; you do need to go to a secondary HR system to figure that out.

So that tackles one part of it, which is the general need for all these different multimodal workflows that can go in any number of different directions. The other part of it is: how can we drive efficiency, or how can we introduce net new competencies that were maybe difficult and isolated before? And when I think of that, there are two examples that immediately come to mind.

Probably about six months ago in our preview, we introduced the concept of script analysis. And when we talked to some customers, they didn't even do that. They did not try to understand how a script would execute, how code would maliciously go out and do something in a system. They would either pass that off to a partner or maybe call in a contractor for those special cases.

But now, out of the gate inside of Defender, they have the ability to analyze a script and have it broken down in a way that anyone can understand. And so what used to be a highly specialized, uniquely reserved skill set for the most capable people inside of a security organization could now be something that even the most junior analyst could pick up.

And the other one that I like to point out, and I think this is also helpful, is that when you look at some of the more complex attacks, you have to go through and analyze large pieces of information, or multiple different sources or alerts or otherwise. And there are inherent advantages in being able to collapse all that into a summary, or into the most profound elements that would lead you in a direction to then take more directed and informed action.

And that's where we've seen impact for people that operate in those systems. It's why we built what is called an embedded experience inside of Defender, Purview, Entra, and Intune, with more on the way, that helps with exactly that. And it's one of the things that our customers love the most about where Copilot for Security is going and how it will be there with them no matter where their workflow operates.

So Ryan, obviously there are many, many scenarios where customers could use Copilot for Security, because that's the point. But could you walk us through a typical scenario that you've seen folks use it for? I'll build upon the script analysis portion, because that's usually critical to an incident response process.

What Copilot will do, or how people interact with Copilot in those cases, is that it's there with them to kick off that initial analysis: understanding alerts, understanding constituent elements of that incident or that incident response process, such as a script analysis or a file analysis, or even looking at things like user risk or device risk attached to the incident itself.

So the natural triage process then becomes something that's informed by artificial intelligence, and it becomes more efficient and more approachable for everyone. There are other scenarios as well, some really great ones, and one of the things that I really like to talk about is the way it now infuses threat intelligence into everything naturally.

If you were to go out and ask a lot of people: how do you make a profile for a threat actor, or how do you understand the impact of a threat actor in an active incident? Usually that is something people have to learn in the moment and take forward to inform the context of all the elements of their incident, or all of the things that need to come next in that active response.

Chances are, you can do things like type Manatee Tempest into the prompt bar, and that's all you need to get the entire profile, recommendations, or even considerations of risk that you would take forward and use to respond to a breach, respond to an incident, and reduce any further risk or mitigate any additional actions taken by the threat actor. So those are some immediate stories that come to mind, and there's a lot more on the way.

And I think the ones that excite me the most, with some features we're rolling out now, are how we're starting to impact IT operators: how they are looking at things like comparing device configurations, understanding how device configurations can have a security or even a threat intel context, and starting to fuse together what have been two different sides of the house, having them talk again and figure things out collectively.

I know data security comes up a lot around AI, so tell me how Copilot for Security and data security intersect. You're right, it's something that will usually come up almost immediately in any conversation with anyone looking to bring AI into their organization. And there are a couple of different facets to consider.

First and foremost, are they exposing their organization to any intellectual property violations? Meaning, based on how the AI was trained or what data it sources for responses, is it using something that doesn't belong to the customer, or doesn't belong to Microsoft? I've even had conversations with leaders across a number of very large entities who have stated: we will not bring in AI until we can figure out the intellectual property component.

Now, the second part of your question: when it comes to data security, how should I start to think about it in relation to AI, and in relation to Copilot for Security? Part of it, and what I usually like to explain first, is how Copilot for Security operates a little bit differently than most AI.

So first and foremost, we're not training off what customers are doing or prompting within the system, meaning we are respecting that data residency. We are respecting the privacy that customers should have to operate in an AI environment. And that's part of what I would call table stakes for an enterprise AI solution.

Next, when it comes to what we use to allow Copilot to operate: this is where the architectural decisions we made very early on for the problem we're trying to solve, by extension, put us in a much different place, and in some ways help us establish an even better AI security story than what other solutions can talk about. So Copilot is not going to do something like sideload your data.

In fact, one of the things a lot of different AI systems will do is take anything they want to do with their AI solution, ask you to load your data into a vector database, and from that create a series of embeddings to understand that data. So really, it's almost like, back to the SQL conversation from before, just a series of references into a database.

That's not what Copilot does. Copilot figures out the best plugin to respond to a user, selects a skill, goes out and accesses that system on behalf of the user's authentication and permissions to that system, and then reasons over what's there and only returns the necessary results. So there's nothing stored from that system that is necessary to make our system work at the onset. We're not loading all of your Sentinel data.

We're not loading all of your Defender data or ServiceNow or otherwise. We're reasoning over it in place.

Now, there are things we do to introduce knowledge bases, and I think that probably gets into the next part of the conversation when it comes to AI. With a system that gives anyone the ability to ask simple questions and get profound results or information back, you naturally start to expose more things than people had thought to access, or even tried to access before, because the limitations of, let's say, a query language

or whatever interface prevented them from getting to that data, despite the fact that they maybe should never have had access to it. When it comes to data security and what you should think about on your organizational AI journey, part of that has to incorporate looking at user permissions: seeing what they do have access to, where that data is exposed, and, if they do access it, what the consequences would be.

How much of a consideration this becomes depends on the AI systems you have. Does it train off your data? Does it sideload your data? What data is necessary to make those AI systems function? Those all bring about a certain degree of stringency needed to understand what controls and what protections you should have in place.

Generally speaking, where we are with Copilot, what I recommend to all customers is: figure out your data story first, figure out your user permissions, and we'll respect that. That's part of the system design, to reinforce what you should already have, which we'll call sound data security principles. One thing that took my interest was the extensibility aspects of this. You mentioned the word plugins before. What's the story there from a developer perspective?

Customers can get really excited about what we have coming and what is available for Copilot for Security. We have taken forward architecture from OpenAI, where they introduced what are called OpenAI plugins. A plugin declares a manifest, and the manifest becomes a mechanism to connect to a system or a database, or in some cases even just redefine how data is classified. In Copilot, we support three different types of plugins.

The most common one, which I expect customers to use, will be the API plugin. We've created a new standard with Copilot for Security, a manifest that allows you to do more than you could with OpenAI plugins. You can do things like use skills to invoke sub-skills. A skill is a mechanism to understand something out of a system. For example, one of the skills we have for Defender is summarize an incident.

Under the ServiceNow plugin, we have skills to find incidents, summarize incidents, and write a summary of a workflow back to a ServiceNow incident as a comment. Those are all different things that can be invoked, and they all reflect different things you could build with an AI plugin. What makes ours unique is that we have the possibility of using a skill to invoke a secondary skill.

We also have the possibility of adding descriptions to those skills and providing some feedback to users, allowing them to put in different parameters against them. What this does, at the core of our AI orchestration engine, is allow us to be more effective in how we select what to respond with and how to respond to a user, to best enable them to be successful in their prompting experience.
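
To make the manifest idea concrete, here is a minimal sketch of a custom API plugin manifest, modeled on the published format; the plugin name and OpenAPI URL are invented for illustration, so check the current schema on Microsoft Learn before relying on it.

```yaml
# Hypothetical Copilot for Security API plugin (illustrative only).
Descriptor:
  Name: ContosoTicketing
  DisplayName: Contoso Ticketing
  Description: Finds and summarizes incidents in Contoso's ticketing system.

SkillGroups:
  - Format: API
    Settings:
      # Skills, and any sub-skills they invoke, are derived from the
      # operations in this OpenAPI spec; the URL is a placeholder.
      OpenApiSpecUrl: https://ticketing.contoso.example/openapi.yaml
```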

Beyond the API plugins, we then have KQL plugins. These are really great for all of the reasons you'd expect. Most customers that work with Microsoft products have libraries of tons of different KQL queries. You can take those, build them as a plugin, and allow them to be something that Copilot and the orchestrator can use as a mechanism to respond to a user prompt.
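
A KQL plugin manifest follows the same shape, swapping the API settings for a query template. Again a hedged sketch: the skill, query, and workspace coordinates are invented, and the exact setting names should be verified against the documentation.

```yaml
# Hypothetical KQL plugin wrapping an existing hunting query (illustrative).
Descriptor:
  Name: RiskySignIns
  DisplayName: Risky sign-ins
  Description: Surfaces recent risky sign-in activity from Microsoft Sentinel.

SkillGroups:
  - Format: KQL
    Skills:
      - Name: GetRecentRiskySignIns
        Description: Lists sign-ins flagged as risky in the last day.
        Settings:
          Target: Sentinel
          TenantId: <tenant-id>             # placeholders, not real values
          SubscriptionId: <subscription-id>
          ResourceGroupName: <resource-group>
          WorkspaceName: <workspace>
          Template: |-
            SigninLogs
            | where TimeGenerated > ago(1d)
            | where RiskLevelDuringSignIn in ("medium", "high")
            | project TimeGenerated, UserPrincipalName, AppDisplayName, IPAddress
```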

The third type, and this one's novel, something net new in the age of AI, is what we call our GPT plugins. GPT plugins are ways to define and label data. For example, the one that I talk about with customers first is the concept of defanging URLs, or rendering URLs inert so that no one can accidentally click one and go to some website that's going to do all kinds of bad things, maybe even cause you to invest in a foreign government. The process of defanging a URL takes the URL and renders it inert by adding in extra characters.

That's not a definition that a large language model would have. It's not a definition that Copilot for Security has. But we can define it in text in the GPT plugin manifest. By providing that definition, Copilot for Security then knows what you mean any time you say something like: I need to defang this URL or this indicator, I need to render it inert, or I need to make sure it's safe so that no one clicks on it.
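
That definition in text is really all a GPT plugin is. Here is a hedged sketch of what the defanging example could look like as a manifest; the skill name, input, and template wording are invented for illustration.

```yaml
# Hypothetical GPT plugin for defanging URLs (illustrative only).
Descriptor:
  Name: DefangURL
  DisplayName: Defang URL
  Description: Renders a URL or indicator inert so no one can click it.

SkillGroups:
  - Format: GPT
    Skills:
      - Name: DefangURL
        Description: Defangs a URL, e.g. https://bad.example.com becomes hxxps://bad[.]example[.]com.
        Inputs:
          - Name: url
            Description: The URL or indicator to render inert
            Required: true
        Settings:
          # The "definition in text" is just this template.
          Template: |-
            Defang the following URL so it cannot be clicked. Replace
            "http" with "hxxp" and wrap each dot in square brackets.
            URL: {{url}}
```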

Through that GPT skill, it then has that understanding and can provide that as a prompting mechanism for anyone using Copilot.

We've talked about how we handle data, but of course, customers and people using Copilot for Security are going to want to bring their own data in so it can give them some insights. How does that actually work, then? Is it the plugins? Do you upload things? Well, you're absolutely spot on.

How we think about customer data, or working with data belonging to customers, is through the concept of what we call sources. You talked about plugins; that is absolutely a source. Customers can make a plugin connected to some data store or a specific system they have in their organization, and then that becomes something that Copilot can use as a reference. The other concept we have of a source is what we call a knowledge base.

For knowledge bases, we just released two different pieces of functionality, with a third on the way. The first of those is called file upload, which is exactly what it sounds like: upload a document into Copilot for Security, and it can then use that in reference when responding to you in a prompt-response process. Before I get into the other one, it probably makes sense for me to explain what those documents are useful for.

In any type of security situation or even any type of IT situation, you'll have standard practices and procedures. Maybe you have a standard template you use when you write an incident report. Maybe you have a template to use when you issue a takedown request against a site that's impersonating your company's organization. Any number of different files could represent something specific to your organization that is pertinent to any workflow and aligning it against what your company would expect.

That's what knowledge bases achieve, and file upload is one mechanism by which we provide that. The second is Azure AI Search. Through that, we create an index, and all of the files that you would like to have operate in the context of Copilot for Security become searchable within that semantic index, which Copilot can use to infuse that context inside of your workflow, inside of a session.

That is helpful in and of itself for things like what we talked about: writing a report, but making sure you write that report in the format your company expects; or taking all of the IOCs, such as registrar information from a malicious domain, putting them in the email template you would use, and then exporting that and having the email ready to go. It drives a lot of efficiency and aligns Copilot with your organization.

The third, and this is a little bit of future functionality on the horizon, is that we will eventually introduce the concept of documentation sources.

For example, Microsoft Learn could be a documentation source, where if we need to know how to configure Microsoft Sentinel, we could pull in information from Microsoft Learn, pair that with the individual prompt responses, and get more informed or specifically tailored information against those setup policies and procedures.

Ryan, Copilot for Security has just gone GA, so now everyone can go out and be let loose with it. But what would be a good way for people to get started? Because of course there are so many things you could do with this. What are some nice baby steps into using the product? Yeah, great question.

Where I would start to align people is: first, we've talked about Copilot a lot today and that's piqued your interest, so the next thing I'd encourage you to do is go out and look at some of the videos and publications we have from webinars and learning series, and get that next-level understanding of the functionality we provide today in Copilot, because there is core functionality that will be there out of the box, and then

there is, of course, what will come down the road and what we'll add, and then the final element is what we talked about, the custom plugins and how you can extend it yourself, because at the end of the day it's all about using Copilot for Security to align with your workflow.

So, once you get a good understanding of that, if you go to Microsoft Learn, there is an entire documentation section that will take you through the steps of spinning up your own Copilot for Security instance, getting users into it, and starting to connect it to all of your different sources to give you the best Copilot prompting experience. And the great thing about this is that it is incredibly approachable: you can get it moving and get prompting in the same day.

All of the customers we've onboarded to this point are prompting within the same day of activating Copilot for Security.

It's probably time to start bringing this thing to a close. Ryan, one question we always ask our guests is: if you had just one small final thought to leave our listeners with, what would it be?

The final thing I'll leave with our listeners is to think about how they've experienced working with computers today, and how they need to start thinking about working with computers in the future. Traditionally, if you've worked with a computer, you've maybe written a script: you have a discrete input, you get a discrete output. But now, working with computers is going to become a conversation, where you can ask anything and receive any set of information back.

In a lot of ways, you will want to trust it and look at what is presented to you, but as we've learned from our long and extensive history in security, there should always be an element of trust but verify. As you start to work with AI systems, what I would challenge you to think about and consider is: how are you seeing the AI system work? How do you know what information is being sourced and cited? And finally, how are you putting that into action in a responsible way?

Just like any conversation, such as the one we are having today, at any point in time you have the option to say: Ryan, you know what, that doesn't sound right, I'm going to call you on that; or, I've had enough of you, Ryan, and I'm done with this conversation. You should start to treat AI systems in the same way, where it continues to be an extension of trust, and you should always ensure that the AI is meeting your trust throughout the entirety of the conversation.

Just so everyone knows, we will have links to everything that Ryan just mentioned in the show notes. Again, Ryan, thank you so much for joining us this week. This is a really exciting product. I think it's great to see, and I think we'll learn a heck of a lot more about the capabilities this kind of AI brings to the table as people start to use it more. To all our listeners out there, we hope you found this useful.

Go ahead and kick the tires on Copilot for Security. While you're doing that, stay safe, and we'll see you next time. Thanks for listening to the Azure Security Podcast. You can find show notes and other resources at our website, azsecuritypodcast.net. If you have any questions, please find us on Twitter at AzureSecPod. Background music is from ccmixter.com and licensed under the Creative Commons license.
