>> Hi, everyone. Welcome to today's IT showcase webinar.
Redesign for the cloud: building
a cloud-first network at Microsoft. My name is David Lef.
I am a network architect in Microsoft IT.
I've been with the company for about 18 years and I'm
primarily responsible for Cloud
and Edge networking at Microsoft.
This is my colleague Steve Minor. Steve, could
you please introduce yourself?
>> Hi. As David mentioned, my name is Steve Minor.
I'm a principal program manager,
been here for eight years.
And I have 28 years of industry experience in
IT engineering in general, across multiple firms.
Started out as a technical engineer, dev lead.
But in the last 10 years I've
moved more towards architecture and program management.
Thank you.
>> All right.
We have a pretty packed agenda.
We're going to be covering a lot of content.
So, we're going to jump right in.
If you have any questions during the webinar,
please type them in the question-answer window and
we'll answer them during the live
question-and-answer at the end of the presentation.
First agenda slide here.
We're going to talk to you about an overview of some of
the, I wouldn't want to say
most important, but largest efforts,
efforts that are really designed
to cope with the fact that the company is
embodying the strategy of mobile-first, cloud-first.
The complement to that,
on my side, is our internet-
and wireless-first strategy.
We'll talk a bit about
the Microsoft IT networking environment today,
initiatives that are going to change it and
how it's going to evolve into the future.
We'll talk about how cloud access is changing,
and about our software-defined perimeter, specifically
some of the things that we're doing in Azure.
And then we'll have some key takeaways.
And there's more. There is
an opportunity to change or
adapt and allow some of our legacy applications to
go to the cloud without having to make too many changes,
while also making sure that we keep
a reasonable
service level and security posture as we
accelerate our clients into
the cloud and our applications into the cloud.
We'll cover some of the options that we have around
accelerating that cloud adoption and
some practices that we've developed around
infrastructure- and platform-as-
a-service compute architectures.
First thing we're going to cover is
our internet- and wireless-first initiatives.
They're really a recognition of
the fact that changes
have already happened in our environment.
People have already gone to mobile devices.
They're going to services
that are on the public internet.
They are working from anywhere at
any time from any device.
We definitely do want to make sure that
we support this new working style
of being always connected,
that we do maintain a good security posture,
and that we make the right investments.
And we characterize that as eliminating technology debt.
Right? Technology that we have right now
may still serve
a purpose, but we don't want to over-invest in
it when we know it's going to become obsolete.
So, what we're actually doing is,
we're looking at the business and
engineering scenarios that are
happening in our environment but
they're really not optimized
for today's corporate network.
And we're going to be modifying what's
left of the corporate network as we continue
to move our services to the public internet,
so that it's really optimized for
those scenarios that are going to
stay back on our private network.
As for the user experience, clients
are moving to the internet side
and they're going wireless.
Another thing that we've become aware of is
the traffic patterns have changed in a very dramatic way.
So, where we used to see a lot of east-west traffic
between user buildings and campuses and data centers,
now traffic is becoming what
we characterize as north-south.
So, it's actually leaving our network
going to a public cloud destination.
So, we've profiled what's
happening in our branch offices
and we've actually seen that
70% of the traffic is destined for things
like Office 365 and Dynamics online.
We're also seeing in these branch offices that
the predominance of devices
that people are using are wireless.
So, less than 30% of
our wired ports are being used across the enterprise,
and in some of these branch offices less than 10%.
And what we're doing to adapt to this
is we're really moving to
a model where wireless internet is
the default connectivity for our users and our clients.
So, they start out on a wireless network.
We drop them off on the internet that gives
them the shortest path to their services.
It also allows us to, again,
tailor what's left in the intranet to specialize in
business and engineering functions that
really need to stay back on our private network.
We do know that we have
a very diverse networking environment.
It's probably one of the most diverse ones that I'm familiar
with for
an enterprise that is a high-tech company.
So, if you look at this diagram
here on the left hand side,
you're going to see the many different types
of clients that we have on our network.
We have everything from user clients that are remote,
user clients that are also on premises.
We have a very robust and continuously growing population
that are considered to be internet of things,
connected devices that don't
actually have a person behind them,
but some type of service and technology.
Then we also have dedicated systems.
So, we're an engineering company that
develops software and services and we have a lot of labs,
we have a lot of things like
HoloLens and things like that
that are showing up on our network on a daily basis.
To be able to get them to
the resources they need on the right hand side,
whether they're in the public cloud or on the internet,
in a private environment or on an intranet that
is oriented toward a business function
or a particular group,
we have to be able to provide
them the networking services they need.
So, they need access services,
they need core transport services,
and they also need, especially
when we change environments,
security
policy and security monitoring.
There's going to be a point there
where we can inspect the traffic.
We can prevent compromises or we
can prevent them from doing things they
shouldn't be doing from a security perspective.
There's also the opportunity to optimize,
and this is where things like software-defined WAN and
technologies like that may come into
play at some point, as may acceleration and caching.
So, really you know we don't
have one single networking environment.
We have many that we're trying to stitch together again.
The whole point of this is to
allow our people to be productive.
So, we want to give them the most efficient access
to the resources they need,
while still giving them good quality of service
and a good security posture.
The major initiatives that we're working against in
the networking space right now are around automation.
So, being able to use
software-defined networking to its fullest extent,
being able to automate the routine tasks and being able to
self-service enable a lot of our networking services.
We're transforming our data center at the same time.
So, we're standardizing our labs
and our production data centers,
whether they're on-prem or off-prem in the cloud,
and then trying to make, again,
that transition from on-prem to off-prem,
public to private, very seamless,
and have common policy definitions.
We are moving, again,
all our applications and
our clients to an internet-first posture.
So, in doing that, we still
have an on-premises environment where our clients live,
even if those applications are off-premises in the cloud.
So, we have to optimize that egress path to the cloud,
and it does give us an opportunity to
use different providers for network access and transport.
So, instead of dealing with
traditional network service providers or telcos,
we are looking at leveraging internet service providers.
We've been looking at leveraging
Azure for site-to-site transport,
that transport between data centers.
We're also moving very rapidly toward IPv6.
So, we've exhausted our IPv4 space many times over.
We're enabling the entirety of our enterprise network for
IPv6 at first in a dual stack manner.
But in the future,
we will move some of
our internal networks to IPv6 only.
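As a minimal illustration of the dual-stack idea (this is just an editor's sketch, not tooling from the webinar, and the hostname is only an example), the small Python snippet below checks which address families a name resolves to; a dual-stack service should answer over both IPv4 and IPv6, while an IPv6-only segment would rely on the IPv6 answer alone.

```python
import socket

def address_families(host, port=443):
    """Return which address families (IPv4/IPv6) a hostname resolves to."""
    families = set()
    for family, _type, _proto, _canon, _sockaddr in socket.getaddrinfo(host, port):
        if family == socket.AF_INET6:
            families.add("IPv6")
        elif family == socket.AF_INET:
            families.add("IPv4")
    return families

# A dual-stack endpoint should report both families.
print(address_families("www.microsoft.com"))  # hostname is just an example
```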
We're also very conscious of security.
So, we want to make sure
that we know what's accessing our network.
So, we may still let it on but we have the ability to
actually fingerprint and identify
what type of device it is,
potentially who's using it,
and what they're actually doing.
In terms of constructing policy,
we're moving away from policy that's defined
by things like IP addresses and protocols and ports,
toward things that are really identity-based, whether that is
the identity of a user, an application, or a device.
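To make that shift concrete, here is a minimal, hypothetical Python sketch of what an identity-based rule might look like in place of an IP-and-port rule; the attributes, groups, and application names are illustrative only, not Microsoft's actual policy engine.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str             # authenticated user identity
    user_groups: set      # group memberships resolved from the directory
    device_managed: bool  # is the device under administrative control?
    application: str      # application or service being requested

# Rules keyed on identity attributes; note there are no IP addresses,
# protocols, or ports here. All names below are made up for illustration.
RULES = [
    {"application": "payroll", "require_managed": True,  "allowed_groups": {"finance"}},
    {"application": "wiki",    "require_managed": False, "allowed_groups": {"all-employees"}},
]

def is_allowed(req: AccessRequest) -> bool:
    for rule in RULES:
        if rule["application"] != req.application:
            continue
        if rule["require_managed"] and not req.device_managed:
            return False
        return bool(req.user_groups & rule["allowed_groups"])
    return False  # default deny

print(is_allowed(AccessRequest("alice", {"finance"}, True, "payroll")))  # True
```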
And the last thing on this slide is, again,
that our user base is selecting
devices whose native
connectivity is wireless.
So, we're upgrading our wireless infrastructure
globally to 802.11ac or beyond.
And we're also optimizing what's left of
the remaining wired infrastructure around it.
So as we evolve into the future,
if you look at our current state,
it's not bad. It does the job.
But as we continue to move
down this wireless internet path,
as our applications move to the cloud,
we do need to make some changes.
So at our current state, we have
multiple modes of access.
So you may get a different experience
whether you are wired or wireless,
whether you are remote or local.
The future state is you
are going to get the same level of access regardless of
where you are coming from as long as we
determine that device and that person is
trustworthy and they have the right permissions
to that resource they're trying to reach.
They are going to get the right level
of access regardless of the connectivity
methods or media used.
In our current state, we do have a dependency on
being on
the private intranet and being wired
for a lot of our device management.
We know that we need to move away from that.
We need to enable those devices to be managed from
whatever connectivity method and
whatever network they originate from.
So, we characterize that as device management
that assumes a wireless internet connection.
We also have a variable user experience.
So again whether you are on-prem, off-prem,
you are going to have
a different experience of getting
to public cloud services.
And it may not always be the most
efficient network path to
get to things like Microsoft Azure
or some of our other public cloud services.
So, we are looking at the patterns that exist
whether you are in a Microsoft building
or whether you are home or in a hotel,
whether you are remote accessed in
or you are just simply using native internet connection,
and we are allowing those clients
to use what we believe to
be the most efficient path to those resources while,
again, still maintaining a good security posture.
And then, the last one there is,
I spoke before that we've
exhausted our IPv4 space many times over.
So we are re-using address
space internally in Microsoft and we
are having to do things like NAT in our environment.
When we get to IPv6 native,
we start to reclaim some of the IPv4 space.
So we'll start to take that down.
But IPv4 in our environment won't disappear overnight.
So, it will still be used.
There will still be things that use it,
but the predominance of
internal Microsoft networking in the future will
be on IPv6 as the default.
Our current cloud internet access really was
designed many years ago, 10 or more years,
back in the days
when, I won't say the web was brand new,
but most of the productivity that
we did in our applications was on our private intranet.
They were in a data center that we owned in total
and going out to the internet was needed
for business use cases but it wasn't as
critical to the business as it is today.
So, we have a traditional hub and spoke network topology.
The eight hundred or so sites that we have globally
correspond to
roughly 11 internet egress points that are regionalized.
But there are situations where we do bring branch office traffic
several hundred miles or even
several thousand miles or
kilometers to an internet egress point.
And we know that that is not as efficient as it could be.
So, traffic that is originating from
corpnet today is going to find an edge,
hopefully local, through a default route and then be
routed out to the internet
through what we characterize as a security stamp.
So it has the ability to do firewalling,
intrusion detection,
data loss prevention, malware analysis,
all those great network security functions that keep
our client base safe and keep us
from leaking intellectual property.
But again, because those things
are on our on-premises edge,
our clients don't get the benefit of
those when they roam off-network.
So, they're not 100% effective in all cases.
Where we are really going in the future is something
where those network security services are
present and accessible and
useful regardless of whether you are on-prem or off-prem.
And, a situation where we're going to
move that internet edge closer to
the user even if those users are in
a far-flung branch office.
So, we are going to either move to
a local edge or we are going to move to basically an
internet or public cloud breakout
that is within their country or much,
much closer to their local region.
For the things that remain back on the private network,
we are also doing the same thing with
remote access services or VPN, virtual private network.
So, we are regionalizing both the internet egress point
and ingress point if you are off network.
For things that do need to remain on the private network,
especially when we move a lot of
these branch offices to an internet default posture,
we'll be able to support dedicated tunnels
for things like facilities level infrastructure.
So, think about some of
the security systems that are in buildings
that are actually networked.
So badge readers, door locks, cameras,
those things can stay back on
the intranet and we can still make
their connectivity private through things like
secure VPN or secure IPsec tunnels.
But, this will be a much more flexible architecture
that will allow us to add/remove sites,
scale them if we need to,
with a more optimized physical investment.
But then again, it is more
flexible because we are consuming a lot
of these services from the cloud.
Another thing that we are working on right now is
a software-defined perimeter design in Azure.
So, we went into Azure in a big way
more than five years ago prior to Azure having
a lot of the security functionality that came with
Azure Resource Manager and that has come into
Azure networking within the past year or two:
things like network security groups and
user-defined routes, which have made
things like network virtual appliances
viable in our environment.
A lot of the role-based access
control policy you can apply
through Azure Resource Manager
didn't exist when we first started.
So the security stamp that we put in
front of Azure private networking with
ExpressRoute was very oriented to a physical design.
So, we have a goal in our next design
to go to one that is physical-device-free.
So, the Azure software-defined
perimeter design, which maps to a lot of the guidance
that has come out of the Azure networking team
and matches up to what
the consultants on the Microsoft side
and our partners are advocating,
really uses a model
that takes full advantage of some of
those new Azure networking capabilities and
a lot of partner offerings in
the ecosystem around
virtual appliances and attached services.
So, we are going to be using
virtual firewalls running in Azure IaaS VMs.
We are going to be using attached services to do things
like malware analysis and intrusion detection.
And again, moving to
a design that is much more agile than what we have
today and requires little or no physical investment
to get into a new region or to scale up and down.
This is going to increase our efficiency,
reduce our costs and give us
a better return going into Azure.
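For readers who want to see the shape of those building blocks, below is a hedged, simplified sketch of a network security group rule and a user-defined route pointing at a network virtual appliance, expressed as Python dictionaries whose property names mirror the Azure Resource Manager schemas; all names, priorities, and addresses are placeholders, not our production values.

```python
# Hypothetical, simplified declarations of the two building blocks mentioned
# above. Property names follow Azure Resource Manager conventions; the values
# are placeholders only.

nsg_rule = {
    "name": "allow-https-from-apim-subnet",
    "properties": {
        "priority": 200,
        "direction": "Inbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "10.1.0.0/24",    # e.g., an API Management subnet
        "sourcePortRange": "*",
        "destinationAddressPrefix": "10.1.1.0/24",
        "destinationPortRange": "443",
    },
}

# A user-defined route that forces traffic through a virtual firewall (NVA)
# running in an Azure IaaS VM, instead of relying on a physical edge device.
user_defined_route = {
    "name": "route-egress-through-virtual-firewall",
    "properties": {
        "addressPrefix": "0.0.0.0/0",
        "nextHopType": "VirtualAppliance",
        "nextHopIpAddress": "10.1.2.4",          # placeholder NVA address
    },
}
```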
Some of the takeaways from this section,
again, are that we're moving
client connectivity to wireless.
So, our user base has already
voted with their feet and their pocketbooks.
They're using their phones to be productive,
tablets, laptops, portable machines.
We are even moving some of our fixed desktops to
be wireless in a lot of our buildings.
We are moving away from physical networking,
from physical devices, to software-based networking;
we are increasing our utilization of IPv6
and even going to IPv6-only networks in some cases.
And, we are trying to really deliver
the enterprise experience
across Microsoft via wireless network and on the internet.
One of the cautions that we found is,
we are old, longtime IT guys,
so we always tend to look
for the thing that is familiar, right?
The thing that looks like something
we're confident is going to work.
And we like to try to
make things equivalent to what we already have today,
but that does not always work in a cloud.
So, choosing commonality or
parity may not be the best solution for you,
may not actually be what you need going forward.
So, keep an open mind.
The other thing, too, is don't try to
seek perfection in your new solution.
So, everybody thinks that they want to go
for the biggest, the best.
We've also seen some tendency that people think they can
build something that is better than
what the providers build.
That may not be the right thing to do.
So, look at it from the lens
of is it good enough for what I am trying to do with it,
and does it make sense from
both a combination of
business and technology perspective.
And if it meets that bar,
go for that rather than continuing to make a custom
and potentially expensive investment.
And now Steve, you've got a good big dose
of things from kind of
a network and infrastructure perspective.
But there's a lot to this also
that has to do with moving applications efficiently
to the cloud and being
able to leverage environments that combine
modern applications
and legacy applications in a way,
again, that gives us the quality of
service and security posture that we actually need.
>> Yeah.
So, we're talking about
a major paradigm change here, right?
So, moving from decades of being
on-premise to a new cloud paradigm
involves a lot of thinking.
There's the network elements you're
bringing up, there's security.
Clearly, it's changing the entire paradigm of security.
And what we've been working here at
Microsoft on is how do we
use Azure out-of-the-box services,
both premium and standard services,
to enable a key concept here,
and I want to level-set this real quick.
So, you can always acquire a new SaaS solution.
You could build new solutions on PaaS itself.
But the single biggest encumbrance
you're going to find out at
most enterprises is not
the building or the acquiring of the new tool,
but it's the integration or accessibility to get to
the existing tools and applications in your ecosystem.
Those applications are probably on your domain,
they're either going to be on-premise,
you may have lifted and shifted them into IaaS,
but in either case, they're in your domain.
And, getting access to those applications can be
a major encumbrance to
your actual move to the cloud itself,
and that's because of network security.
And so, what we want to talk to you
about here is how we can
leverage some Azure technologies
to help in that connectivity.
And really, it's a huge opportunity
and I want you to think about it as a way to
accelerate getting cloud-first by
thinking through this little bit.
So, our goal is to help
you accelerate PaaS implementation
by providing enterprise services
that allow for high confidentiality,
we're talking the highest level of
personal information and level of security,
and get that data accessible in
the cloud for other cloud
applications to be built on and to use.
So the key here is,
how do we reduce the barrier of
entry for net new applications in
Azure knowing that they have to
get access to your existing legacy portfolio.
So with that in mind, what we
want to outline to you is a kind of a paradigm shift.
It's a conversation around
thinking outside-in or inside-out.
The traditional app is
an inside-out thinking and that is,
I've been on-premise in the past,
I've shifted to IaaS today,
and some time, maybe FY18,
maybe FY19, sometime in the next few years,
I plan to be native Azure myself.
That's an inside-out view where you're trying to
shift your whole self to the cloud.
And that could be a large re-architecture.
It could be a lot of work.
What we want to introduce is the fact that,
while you're in that journey from IaaS to PaaS,
you're actually an encumbrance to all the rest of
the organization that wants to
access your services from PaaS.
So what you'd like to do is
consider the fact that you could
use what we're calling an enterprise hybrid connectivity.
Oh, wait. Not to be confused with the hybrid connector.
There's so many words we can use
and brands are complicated here.
But, how do we get the connectivity in place?
And what we would like to show
is how you get your application that is
currently in a domain that's
secured and not accessible from the internet
to expose endpoints on PaaS for
easy consumption by net-new PaaS solutions.
By doing that, you're actually
accelerating the ability for your overall organization to
go cloud-first by enabling others in
your organization instead of being the encumbrance.
So that in mind, let's talk about
a couple of options that we have here.
So we use the acronym
EHC for Enterprise Hybrid Connectivity,
and there's two architectural approaches
that we use ourselves inside Microsoft for this.
One is what we call the IaaS compute,
and that is where you already have mature
web services that are on
IaaS that are already industrial.
They're redundant. They're durable.
They have all the utilities that you want.
And you just want to rapidly expose endpoints for PaaS.
And in that example, what we have here is,
we use Azure API Management in a VNET to work
through another VNET that has
a gateway holding an ExpressRoute circuit,
and these are premium services
that allow for a high level of security;
that ExpressRoute is what will
drill back into your domains.
And I'll talk about that a little bit more.
But, we currently drill into a DMZ, a demilitarized zone,
at which point we get an exception
in the domain we want to
call to the exact IP and port.
By doing that, we actually are
re-using the IaaS service that you built out.
But you're standing up a new API, an
HTTPS REST service, natively on Azure.
And so, on the left side over
there, why would you pick that?
Well, you already have durable
and enterprise class services
you want to just rapidly expose.
By doing that,
you are admittedly using your IaaS compute for
the work, but it has been
exposed on PaaS as if it were there.
Performance has been very good.
We're very pleased with how that's working.
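As a rough sketch of what consumption looks like once an endpoint has been exposed this way, a net-new PaaS application might call the API Management front end as below; the URL and key are placeholders (Ocp-Apim-Subscription-Key is the standard API Management subscription-key header), and behind the scenes API Management forwards the call over ExpressRoute to the existing IaaS web service.

```python
import requests

# Hypothetical endpoint exposed through Azure API Management; behind it,
# API Management forwards over ExpressRoute to the existing IaaS web service.
APIM_BASE = "https://contoso-ehc.azure-api.net/legacy-orders"   # placeholder
SUBSCRIPTION_KEY = "<your-apim-subscription-key>"               # placeholder

response = requests.get(
    f"{APIM_BASE}/orders/12345",
    headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```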
Another option is our PaaS compute.
And this is an example you might
exercise where you literally want to deprecate
that IaaS web service or on-premise web service
in favor of doing that compute in PaaS.
And in this example, we then use API Management.
It can be the premium or standard SKU.
It's going to call into a VNET that has
an ASE, an Application Service Environment,
and that's where you build your web APIs,
in the form of API apps.
And those API apps are going to then call
through that same ExpressRoute and
access the API or
SQL database or other technology directly from the ASE.
Now that's a PaaS compute
because your new development for
actually doing the accessing of
the APIs is going to be built in PaaS.
You primarily pick that because you want
to deprecate your IaaS footprint.
And this is actually getting you
closer to your PaaS story.
Notice with both of these though,
that the actual primary execution is still going
to be on your services on your domain.
So this allows, for the picture we showed earlier,
you can accelerate your endpoints on PaaS
for consumption and then follow up
with the remainder of
your own migration to PaaS at
any kind of cadence or schedule you would like.
The key is to accelerate
the experience for other applications
that need to consume your data or service.
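For the PaaS compute variant, the API app that runs inside the ASE could be as simple as the hypothetical Flask sketch below; the on-premises URL is a placeholder, and the point is only that the new PaaS code is a thin layer that reaches back over ExpressRoute to the existing system of record.

```python
from flask import Flask, jsonify
import requests

app = Flask(__name__)

# Placeholder: a private, on-domain endpoint reachable from the ASE's VNET
# over ExpressRoute (through the DMZ exception described above).
ON_PREM_ORDERS_API = "https://orders.corp.contoso.com/api"

@app.route("/orders/<order_id>")
def get_order(order_id):
    # The PaaS compute happens here, in the API app; the system of record
    # is still the existing service back on the domain.
    backend = requests.get(f"{ON_PREM_ORDERS_API}/orders/{order_id}", timeout=10)
    backend.raise_for_status()
    return jsonify(backend.json())

if __name__ == "__main__":
    app.run()
```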
Now, the next two slides I want
to do a high-level overview on
are just a little bit more technical information
for those of you who are inclined.
What we're showing here is
the IaaS compute architecture, where
we use Traffic Manager, an Azure service
that allows you to do basically round-robining,
or basically redundancy, across
two different data centers in Azure.
We use API Management, as I called out.
We have a subnet. We have an ExpressRoute and a firewall.
And you can see on the screen here,
we actually also call out how we do
our basic telemetry,
using technologies like OMS and so forth.
And this also shows you where we are
indeed using a demilitarized zone,
for the actual drill from PaaS
into our domain, and then from there,
we get an exception into
the actual business domain that we're drilling into.
This allows for it to be highly secure,
and because we're using premium services such as
ExpressRoute or the Application Service Environment,
you can really control the internet accessibility
to those components because of the premium nature of them.
That's for the IaaS compute.
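To give a feel for the Traffic Manager piece of that architecture, here is a hedged, simplified profile declaration with two regional API Management endpoints and a health probe, written as a Python dictionary whose property names mirror the Resource Manager schema; all names and targets are placeholders.

```python
# Hypothetical, simplified Traffic Manager profile: one API Management front
# end in each of two Azure regions, with health probes so a failed region is
# taken out of rotation. Names and targets are placeholders only.

traffic_manager_profile = {
    "name": "contoso-ehc-tm",
    "properties": {
        "trafficRoutingMethod": "Weighted",  # equal weights behave like round-robin
        "dnsConfig": {"relativeName": "contoso-ehc", "ttl": 30},
        "monitorConfig": {"protocol": "HTTPS", "port": 443, "path": "/status"},
        "endpoints": [
            {"name": "apim-region-1",
             "properties": {"target": "contoso-ehc-r1.azure-api.net", "weight": 1}},
            {"name": "apim-region-2",
             "properties": {"target": "contoso-ehc-r2.azure-api.net", "weight": 1}},
        ],
    },
}
```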
The next shot here
is what it would look like for the PaaS compute.
So in this case, instead of calling
into the web service on IaaS,
we're actually going to create API apps
inside of an Application Service Environment
and the VNET on PaaS.
So it's very similar. It's just where do you do
your web service compute,
is it in IaaS or is it in PaaS?
They're both viable and it really is going to be
dictated by what's
most viable for you and your roadmap.
The key for both of these is
to just recognize the paradigm
that this isn't actually
moving your whole app on to PaaS,
but it is accelerating
your endpoints on PaaS for consumption by other apps.
You should still have a strategy or a roadmap that
would eventually take your application
to PaaS if it makes sense.
You've got to take into account the lifespan of the app,
when it might sunset, and what
new technologies are coming out.
But it's a way to accelerate the overall organization.
Dave, I realize you
went over a lot of network technologies
and how we have to shift to the wireless view.
But another encumbrance as far as the network goes is,
how do you take your apps on
your domain's network and get them
into PaaS or internet accessibility?
And this was what we thought was a
good segue to talk about that.
These two strategies, I think, Dave, could help a lot
of organizations toward their own cloud-first strategy.
>> Yeah, I would agree. I would characterize what
you built in the EHC as
a way to take advantage of centralized networking and
security functions and not have to
duplicate them across the thousands of applications we have.
You do it a few times with
those big data anchors and
expose them in a very consistent way.
So, again, we get the right service level of
security by taking advantage of things like ExpressRoute.
>> Right.
>> And the permissioning
that we're all used to in the enterprise is
very rooted in domain identities, domain credentials.
>> Yeah.
And I think that it's really key to point out two things
I might not have emphasized enough.
What you're seeing here is really Azure out of the box.
We've got it working with zero code.
Yeah, there's configuration.
It's Azure, you have to configure it.
But if you have the IaaS service,
you don't really literally have to
do a stitch of code to make this work.
It's just pulling together the various Azure services.
Again, if you want to do PaaS compute,
then that probably entails some or
a little development that's on your roadmap.
I think the key that you point out there is this.
Depending on your organization,
this may be a centralized service
that multiple teams share,
or it may be broken out by line of
business or by however your
enterprise wants to do it.
That economy of scale is
something that you have to consider.
This architecture is very adaptable for
having economies of scale because
with things like API Management
and ASE, you're talking about a lot of
scalability in those services.
They could actually literally create
an opportunity for centralization.
But that's something your organization has to consider in
terms of co-hosting into those environments.
Role-based security options are
making that more viable every day.
>> Yes. But I wouldn't characterize
centralization as a single instance of this.
>> Not necessarily.
>> There could be multiple instances of this, as long as
we can actually centrally
manage things like policy and security posture,
even as we distribute out a service
like this to multiple business units.
>> That's a great way to put it.
And we're actually looking at
coming up with automated means for actually
assessing the infrastructural configuration
to confirm you are compliant with the strategy.
The key is, we actually spent many,
many months working with our security team here in
Microsoft to do penetration testing and to do
different security reviews to confirm that this was
a very strong and durable approach
all the way up to highly confidential data.
And that's what's allowing
us to be successful here at Microsoft.
>> Thank you, Steve. There are a lot of
resources that are published out
on the IT Showcase website,
and other locations that you'll see here,
that give you
a good idea of some of the other things that we've
been doing around
network topology for modern applications,
how we use ExpressRoute in the enterprise,
there is an ExpressRoute home page, hosted by the Azure folks, if you'd like to
dig in and understand more about
the technology,
and some documentation on how to
make security a priority when you're actually
moving applications and data to the cloud.
So, things do change.
A lot of your principles will still hold true,
but how you actually execute and
satisfy those principles is
different as you move into the cloud.
And that brings us to the end of the presentation,
and we're going to move into question and answer now.
Please remember to type your questions
into the question-and-answer window
and we'll read them out loud before
we answer them. Thank you.
>> Thanks.
>> Before we move into
a live question-and-answer session,
I want to take a minute to introduce Veerash,
who is joining us today to
answer any technical questions you may have.
>> Thank you, David.
So, hey, everyone. I am Veerash.
I'm an architect in the Enterprise
Platform Services team in Microsoft.
I'm currently working on designing
the EHC reference patterns,
some of which Steve has talked about.
So we currently offer EHC as
a platform for teams within Microsoft.
So I'm here to answer any
specific questions you have with regard to
the EHC patterns and the platform itself. Thank you.
>> Thank you for that, Veerash.
Now let's go ahead and jump right into questions.
The first one is from Eric.
His question was, "How do you
distinguish between managed and unmanaged clients?"
I'll take that one.
Managed clients to us really mean that we
exert some sort of
administrative control over the client.
So, the client may be a Windows PC that is joined to
Windows Server Active Directory and
managed by System Center.
Now, also we consider clients that are joined to
Azure Active Directory and managed by
Intune to be managed as well.
Unmanaged clients would be the opposite of that.
Something that is on our network that is leveraging
our resources that we don't exert
any administrative control over.
And that is a common scenario in
our environment now that we are very heterogeneous.
We have Windows, we have iOS,
we have Linux, we have
everything under the sun in our environment.
Now we have nontraditional
clients that touch our network as well.
Internet of Things types of devices.
So we make that distinction between something, again,
that we have some administrative control
over versus something that we don't.
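A minimal sketch of that managed-versus-unmanaged distinction, with illustrative attribute names rather than our actual inventory schema, might look like this in Python:

```python
from dataclasses import dataclass

@dataclass
class Client:
    ad_joined: bool        # joined to Windows Server Active Directory
    sccm_managed: bool     # managed by System Center
    aad_joined: bool       # joined to Azure Active Directory
    intune_enrolled: bool  # managed by Intune

def is_managed(client: Client) -> bool:
    """A client is 'managed' if we exert some administrative control over it."""
    on_prem_managed = client.ad_joined and client.sccm_managed
    cloud_managed = client.aad_joined and client.intune_enrolled
    return on_prem_managed or cloud_managed

# Example: an IoT device on the network that is not enrolled anywhere.
print(is_managed(Client(False, False, False, False)))  # -> False (unmanaged)
```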
The next question is about EHC platform itself.
So for EHC, what are the primary benefits of the platform
as a service compute option over
infrastructure as a service?
>> So basically, platform
as a service allows you to get off of any of your
traditional on domain or even IaaS hardware.
So you could basically get off of the problem
of having to maintain and scale the physical assets.
You also have to recognize that it's possibly part of
a larger strategy you have
around cloud-first, for example.
So, a lot of teams have migrated from on-premise to IaaS,
and then eventually you want to
go in the PaaS to get those benefits
of hiding that infrastructure element from your solution.
That's what that's for.
That's part of it. Anything you want to add, Veerash?
>> No, I think that's good.
>> Okay.
>> Next one,
looks like another one for me. Also from Eric.
For an attribute-based security policy,
what kind of attributes are you looking for and
what tools are you using to create policy for those attributes?
So, an attribute-based security policy
rather than one that is just
simply based upon source and destination,
where are you coming from and where are you going to,
would be things like user identity.
Who are you? Your device identity.
What type of device or is there
an identity of that device that we recognize?
It could also be application-based.
So, can we profile and
understand what application you're using,
does the network traffic that we're
seeing look legitimate for that type of application?
And based upon those more granular things,
we can actually make a good decision
on what access network we land you on,
so whether you're wired, wireless, or remote.
We can also put you on
a particular logical network segment
and isolate you away from
other parts of the network if we need to.
We can also apply policy at our edges.
So when you exit our network we can give you
a different policy depending
upon whether you're managed, unmanaged,
what type of device you're using
and also what destination you're going to,
or what application you're using.
And this gives us a way
to provide what I characterize as kind of
differentiated security policy that is
very scenario-specific.
Where in the past, we made
these very broad categories
and sometimes only one category
that we found at times opened us up to
too much risk and at
other times it constrained what the business can do.
So we have a lot of tools now: we do things like 802.1X
on wired and wireless, and we have
conditional access on remote.
We use next-generation firewalls to
provide controls when you're going
from network zone to network zone
or environment to environment,
and then also egressing our network.
Another question for EHC.
So, what if your back end application
never itself migrates to platform as a service?
Are there implications to that constraint?
>> That's a really good question.
Basically, it boils down to you're trying to provide
connectivity for that application
on the cloud or the internet.
So you could actually look at it this way:
in a case where an application may
never have a roadmap to get onto native PaaS,
then this basically is
a permanent capability or
a connecting layer for that application.
At some point, it will be sunsetted
in favor of something new.
I'm sure that will be cloud.
But, I think it is
an expected pattern that you could have
applications that will never
themselves be natively on Azure,
and that makes the EHC thing not just a temporary stopgap
but the permanent solution for that app.
>> One more for EHC here.
So, are you actually stuck between choosing PaaS or IaaS?
Is it one or the other?
>> Yeah. That's a good question, too.
So yeah, while we illustrated
two major patterns, you can do
an IaaS compute or a PaaS compute based upon your needs.
The fact of the matter is
that you could actually choose
that endpoint by endpoint.
So you could implement both and it could
be varied based upon each service's need.
Now, I will make one caveat:
you would need to make sure you use
the premium APIM SKU to be able to do both.
So for the PaaS pattern,
you could use the standard APIM SKU,
but you couldn't for the IaaS compute.
So that's one thing you should take into account.
So if you implemented premium APIM SKU in a VNET
that connects through the service ExpressRoute,
you can go either way.
It's just whether or not you go through the ASE or
directly to the API on the IaaS layer.
>> Good. One more.
It looks like it's more oriented
toward core networking and security.
What sorts of services and workloads are
going to remain private and on-premises at Microsoft,
and does this conflict with
the things that we talk about in terms of cloud-first,
mobile-first and some of
our goals to move a lot of our services to the internet?
There are always going to be things that have to remain
on-premises or private to be able to support
the enterprise intranet by definition.
We have to be able to address our clients,
we have to be able to locate things.
And there are things in the Microsoft enterprise
that are going to remain private because we
do build the products and services that support
the public cloud services that we offer.
But this is the vast minority
of what we run particularly in the enterprise.
So we're talking single digits,
small percentages, of the
things that used to be completely private in
the past that will remain on-premises.
And there is a lot of opportunity in moving
the vast majority to
cloud and being able to present that on
the internet to be able to get to
this future state where you can,
within reason, connect to anything from anywhere and be
productive anywhere, anytime from anything.
So, core infrastructure will be there.
The things that really by definition, again,
need to be private, or that have
no business need to be presented on the internet side
and for which there's not necessarily any
value in putting them in the public cloud,
those are the only things that will stay behind.
And that doesn't bother me very much.
So, it's so vanishingly
small when you look at the size of
the Microsoft enterprise that
the opportunity in moving things to the public cloud is huge.
>> Another question for EHC.
So, what advantages does EHC have
over other hybrid connectivity options?
>> That's a good question.
It's an evolving story by the way.
Now, what we've experienced here inside Microsoft
using Azure, obviously we're Azure-based.
You see there are a variety of
technologies that can meet this need.
But a lot of those are based upon having to install
a local agent on your domain site or
having to do yet another level of WCF
service wrapping or web service wrapping around it.
Those things are eliminated basically by using
PaaS-based solutions and
ExpressRoute, which will drill right in;
we go through, again, the DMZ
with an exception to the domain.
Meaning that we are literally drilling
in from the outside,
as opposed to a lot of them that are agent-based,
where you have an agent on
the domain that opens the pipe from the inside out.
Veerash, do you want to add to that?
>> Yes. So, the other aspect of it is basically,
you are connecting to your backend service
directly instead of
going through the push-pull model that
other hybrid connectivity options provide.
So, this will give you a lot of
performance improvement as well as security
improvements because you are directly
connecting to the on-premise endpoint.
So, that's another advantage
that EHC provides compared to other hybrid connectors.
>> Yes. And in
case you're not familiar,
I think that's a very good point Veerash, thanks,
that the ExpressRoute is a dedicated network line.
It's like the old days where you had a T-1 or T-3 line.
It's owned by you, so effectively when you
use the EHC architecture
as we are showing here and you
implement something in ASE,
the Application Service Environment,
that's dedicated hardware going through
a dedicated network cable to your domain.
You could almost look at that as you're just basically
extending your local hardware footprint.
It is not really internet exposed.
>> Yes, that's right.
>> Is it also fair to say
that if you want that direct connectivity,
and you don't want the push-pull model
or the message bus model,
using something like EHC allows you to get
economies of scale and centralization,
where you could use
ExpressRoute for every single application
that you wanted to,
versus a private connection that you would have to
build independently for every single application?
>> Yes, that's right. And then that
obviously gives the manageability aspect, as well as easing
that point where the number
of [inaudible] we are opening through the corpnet can
be centrally managed and can be
audited at a common location as well.
>> It ties into the earlier question about managed.
The whole EHC architecture is very managed and we have
the necessary abilities for our own security team
to do auditing and looking for penetration alerts,
all kinds of stuff.
It's built into it. Very secure.
>> Good.
Looks like one for me here.
Is your future state a completely perimeterless network?
And if so, where does
all that security monitoring and policy go?
I'll answer the first one first.
Is it completely perimeterless?
No. Does a lot of the perimeter that
we've created and recognized in the past,
in the legacy environment,
remain? The answer to that is also no.
So, a lot of the traditional network perimeters
are already gone or going away.
And we're looking at different ways to create a perimeter or
different places to be able to apply these policies.
So, where we used to have
very defined control over topology and network paths,
and a place where I can actually point to and say,
"This is a chokepoint that I can establish
a policy on or do security monitoring on,"
that may not exist when we move things into the cloud.
We have connectivity happening off of our network.
We have connectivity happening between parts
of applications that is
completely internal and opaque to us.
We don't see it at all.
So, there is still a place for network-based controls.
That network-based control may be
on part of the network proper, it may be on an edge,
and it may be an attribute of, or provided
by, the instance itself,
where the instance of the application has
an access control list that
defines what it will and will not listen to.
There are also other ways to provide
access control that go beyond just network controls,
right: strong identity, strong auth,
things like that, which we are looking at
across the different types
of solutions we have out there.
So, that security monitoring again,
and policy may go into the application itself,
it may become identity based rather than network based.
It may be on the instance, on the edge,
on the virtual network even if you are in Azure now
with network security groups.
We also have an emerging network
virtual appliance and network function
virtualization ecosystem
in Azure that we are looking at now.
So, it used to be in
the past that we actually had to haul
traffic out of the cloud
to get control and get monitoring on it.
That is changing as well.
So, again you can preserve some of
the traditional perimeter networking types of
controls but they may not apply
in every single situation.
And you know you have to be willing to look at
potentially other alternatives and
peel back your requirements.
What are your threats and risks?
And what are the capabilities you need to mitigate those?
And potentially look at new and novel ways of doing that.
One more for EHC here.
So, what about outbound needs
from a private domain to Azure or to the internet?
So, do you manage outbound connectivity as well?
>> Good question. To be honest,
if your domain already has internet connectivity,
there is no need to use EHC
for the majority of the cases.
It is a different question if your domain does not,
then I am sure you have to worry
about how you do that today.
Overall, I will state that ExpressRoute
can be used in conjunction with
an Application Service Environment ILB
to do tunneling out,
but you would have to look into
what special cases that would
work for you and why that would be the case.
>> So, there are some
tenants that were requesting to move
their IaaS compute onto PaaS but just keep it
internally load balanced,
as if it were connected to corp.
So, basically they wanted to run corp services on PaaS.
It's for those sorts of tenants that this is
something that would be definitely useful,
where you maintain it behind
an internal load balancer and
then move all your compute to PaaS.
>> And actually that is really exciting Veerash,
thanks for bringing it up.
Because what that's pointed out is they're
actually allowing you to natively
expand out an application that's
not on Azure PaaS with PaaS by doing that.
So, by using an ExpressRoute through an ASE ILB,
which is an internal load balancer,
you basically,
because the ASE again is dedicated hardware,
and you're going through a dedicated network wire
back to your domain,
you're effectively just expanding
your on-premises or IaaS
solution into PaaS, but it otherwise has no accessibility to the internet.
>> So, I want to add just one more thing.
So, apart from these tenants, the product teams
are benefiting a lot from these requirements.
So basically it's not
just the App service environment now.
We are working with
the service fabric team as well to enable this sort of
internal load balancing feature
within the VNET so that they can set
up these clusters inside the VNET as well.
>> Right. And you did mention
that ASE is application service environment.
So, I know we have used the acronym before, but that's
web apps, logic apps, and mobile apps bundled
up into basically a service that you deploy as a unit.
>> That's correct.
>> And ASE is a premium service so it's a way to get
dedicated resources, for both performance and security.
>> Right.
>> Another one for EHC.
Are there actually primary benefits of going down
the platform as a service route
over infrastructure as a service?
>> I think we actually addressed that one earlier.
>> Did we already address that one?
I think we had one that was said,
"Are you stuck between choosing one or the other?".
>> And we talked about that too,
that you can go endpoint by endpoint.
I think we've tapped those out.
>> Okay. One more.
It looks like core network and security.
Are there manageability and security differences
as you move clients and their resources to the Internet?
The answer to that is yes.
And those resources could
be applications that could be data,
that could be services.
We have traditionally built a lot of
our enterprise manageability
and security capabilities around
this notion that we are on
a local area network,
and because we are on
a local area network behind
an edge and it was all private,
we could assume that we were
administrators of all these things.
So, we have to change
how we manage these systems and how we
apply security policy
as these things move to the public side.
They're not on networks that we control.
They may not rely on connectivity back to
our corporate network, so we can't force them back
in just to get these capabilities.
So, we're trying to shift
away from things that are locked on
our private network and dependent upon
being on-premises, to things that work when you're on
the public internet and are
not dependent upon any type of physical location.
So, if we
need to manage and secure your system when you're
at home or in a hotel or on
a customer site,
we're using tools now that enable us to do that.
Whereas in the past
we couldn't do that once you were off-prem;
unless you were actually remote accessed in,
we couldn't see your system.
So, now we are moving to things like Azure Active Directory and
Microsoft Intune, which are
available from the internet side
and which are kind of a
client-based subscription model.
If you attach to
your tenancy in those services, you become managed.
It's different than it has been in the past.
I wouldn't say that they're absolutely equal.
They are significantly different in some cases.
But, you know, we are migrating in that direction.
So, one of the base assumptions that we're making, that we are
challenging ourselves with, is
to assume all the clients are mobile.
So, start from a wireless internet connection,
assume everything is mobile, and see what doesn't work.
And the things that don't
work that we still need to preserve,
we need to go out and find a new solution for.
>> I believe that's all the questions that we have.
>> So, this is
going to wrap up our live question and answer session.
If you haven't already done so please take a minute
and answer the poll question
that you see in the Skype window.
We'd like to thank the audience for joining us
today and I'd like to remind you
also that IT Showcase live webinars are
scheduled every month on a variety of topics.
We hope you'll join us again
and bring your colleagues and friends.
You can find our upcoming webinars schedule
as well as the on-demand webinars and
other content at microsoft.com/itshowcase. Thank you.