Author: Tommy Kneetz

about Tommy

Tommy Kneetz

Tommy works as a Senior Cloud Consultant with more than 20 years of experience. For the last 10 years his focus has been on Microsoft Azure, especially Azure Governance, Azure networking, Azure security, Azure IaaS and PaaS, and his favorite: Azure Virtual Desktop.

Tommy is Co-Founder and organizer of “Azure Meetup Schwerin” – a Germany-based Azure community.

RDP ShortPath for Azure Virtual Desktop

What is RDP Shortpath?

Remote Desktop Protocol (RDP) by default uses a TCP-based reverse connect transport as it provides the best compatibility with various networking configurations and has a high success rate for establishing RDP connections. However, if RDP Shortpath can be used instead, this UDP-based transport offers better connection reliability and more consistent latency.

Shortpath over public networks

Overview

UDP is enabled by default. Both the client and the AVD session host must be allowed to use UDP.

Network configuration

Details can be found here: https://learn.microsoft.com/en-us/azure/virtual-desktop/rdp-shortpath?tabs=public-networks#session-host-virtual-network

Session host virtual network

Name | Source | Source Port | Destination | Destination Port | Protocol | Action
RDP Shortpath Server Endpoint | VM subnet | Any | Any | 1024-65535 (default 49152-65535) | UDP | Allow
STUN/TURN UDP | VM subnet | Any | 20.202.0.0/16 | 3478 | UDP | Allow
STUN/TURN TCP | VM subnet | Any | 20.202.0.0/16 | 443 | TCP | Allow

Client network

Name | Source | Source Port | Destination | Destination Port | Protocol | Action
RDP Shortpath Server Endpoint | Client network | Any | Public IP addresses assigned to NAT Gateway or Azure Firewall (provided by the STUN endpoint) | 1024-65535 (default 49152-65535) | UDP | Allow
STUN/TURN UDP | Client network | Any | 20.202.0.0/16 | 3478 | UDP | Allow
STUN/TURN TCP | Client network | Any | 20.202.0.0/16 | 443 | TCP | Allow

Result

Shortpath for managed networks

For managed networks you can also establish a direct connection from your end device to the session host via ExpressRoute or a site-to-site VPN.

To enable this you need to do the following:

  1. Enable Shortpath on the session host

The ADMX files can be downloaded here: https://aka.ms/avdgpo
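If you prefer not to deploy the ADMX, the same setting can be applied directly in the registry. A minimal PowerShell sketch, using the registry values documented for RDP Shortpath on managed networks (run on the session host; a restart is required for it to take effect):

# Enable the RDP Shortpath listener for managed networks on UDP 3390
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations'
New-ItemProperty -Path $path -Name 'fUseUdpPortRedirector' -Value 1 -PropertyType DWORD -Force
New-ItemProperty -Path $path -Name 'UdpPortNumber' -Value 3390 -PropertyType DWORD -Force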

  2. Windows Firewall – allow port 3390
New-NetFirewallRule -DisplayName 'Remote Desktop - RDP Shortpath (UDP-In)' -Action Allow -Description 'Inbound rule for the Remote Desktop service to allow RDP Shortpath traffic. [UDP 3390]' -Group '@FirewallAPI.dll,-28752' -Name 'RemoteDesktop-UserMode-In-RDPShortpath-UDP' -PolicyStore PersistentStore -Profile Domain, Private -Service TermService -Protocol UDP -LocalPort 3390 -Program '%SystemRoot%\system32\svchost.exe' -Enabled:True
  3. Set the group policy setting on the clients

Result

Deploy Azure Virtual Desktop (AVD) on Azure Stack HCI

This post is a short guide through all the steps, excluding the HCI setup itself.

Requirements

After you have successfully deployed your HCI cluster, the cluster resources appear in the Azure portal. Once you click on that cluster you will find the following overview.

As you can see, all prerequisites are met. These prerequisites are:

Deployment

Now you can click on “DEPLOY” to start a custom deployment:

Most of the fields are self-explanatory, but these three were a bit tricky for me 😉

LOCATION

You will find the location within your Azure Arc resources (Azure Portal > Azure Arc > Custom locations > Properties > ID).

IMAGE

To find the image ID, you first have to add at least one image to Azure Stack HCI.

You have three options to add an image:

The easiest way to get started is to add an Azure Marketplace image. I have already added “Windows 11” and “Windows Server” to my list. After adding an image, go to Azure portal > Azure Stack HCI > VM image > “windows11” and copy the URL from your browser – it must look like this:

https://portal.azure.com/#@DOMAIN/resource/subscriptions/SUB_ID/resourceGroups/RG-NAME/providers/microsoft.azurestackhci/marketplaceGalleryImages/IMAGE-NAME/overview

Remove /overview at the end and copy that URL into your custom deployment.

NETWORK

Go to Azure Stack HCI > your HCI cluster > virtual networks and copy the browser URL, which must look like this: https://portal.azure.com/#@DOMAIN/resource/subscriptions/SUB_ID/resourceGroups/RG-NAME/providers/Microsoft.AzureStackHCI/clusters/CLUSTERNAME/virtualnetworks

and add your virtual network name to the end like this:

https://portal.azure.com/#@DOMAIN/resource/subscriptions/SUB_ID/resourceGroups/RG-NAME/providers/Microsoft.AzureStackHCI/clusters/CLUSTERNAME/virtualnetworks/NETWORKNAME

Issues during deployment

My first deployments failed and I wasn’t sure why, so I checked the deployments within my resource group and compared my inputs to the last failed one.

I found that my VM tried to access the following URL. That was blocked, so I copied the script and hosted it under my own HTTPS URL as a workaround.

Only a “redeploy” of one of the last deployments gives you the option to change that URL.

After Deployment

After that deployment I had my VM up and running on my Azure Stack HCI. It was domain-joined, but the AVD agent was missing, so I installed the AVD agent manually. After that, I was able to see the host within the Azure portal.

Successful Connection

And here we go: I was able to get a connection.

FSLogix Profile Container with Azure Files and AzureAD

I have already written a post about how to deploy AVD with AzureAD joined VMs
(https://tech-guys.blog/2021/11/10/azure-virtual-desktop-and-azuread-joined-vm/)

In addition to the Azure AD-joined VMs, we need an Azure storage account and an Azure file share to store our FSLogix profiles. This post will guide you through all the steps.

At the time of writing, the Azure Files and Azure AD Kerberos functionality is still in preview (https://docs.microsoft.com/en-us/azure/virtual-desktop/create-profile-container-azure-ad).

Requirements:

  • Users must be hybrid user identities!
  1. You must have an Azure subscription and Azure AD Connect already installed and configured.
  2. Create a storage account.
    I used the following settings and left all other settings at their defaults.

3. Create an Azure file share

For my lab I only use the standard SKU, but for production environments I always use and recommend premium storage. Microsoft has published guidance on sizing and designing FSLogix solutions for enterprises here: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/wvd/windows-virtual-desktop-fslogix

4. Enable Azure AD authorization

Note: the following snippet assumes that $tenantId, $subscriptionId, $resourceGroupName, and $storageAccountName are already defined (they are set in step 5 below).

Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
$Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
$json = @{properties=@{azureFilesIdentityBasedAuthentication=@{directoryServiceOptions="AADKERB"}}};
$json = $json | ConvertTo-Json -Depth 99
$token = $(Get-AzAccessToken).Token
$headers = @{ Authorization="Bearer $token" }
try {
Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json;
} catch {
Write-Host $_.Exception.ToString()
Write-Error -Message "Caught exception setting Storage Account directoryServiceOptions=AADKERB: $_" -ErrorAction Stop
}
New-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -KeyName kerb1 -ErrorAction Stop

5. Create an Azure AD application that represents the storage account (during the preview, only possible via PowerShell)

Install-Module -Name Az.Storage
Install-Module -Name AzureAD
$tenantid = "yourid"
$subscriptionid = "yourid"
$resourceGroupName = "rg"
$storageAccountName = "name"
#set password between storage account and azuread
$kerbKey1 = Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName -ListKerbKey | Where-Object { $_.KeyName -like "kerb1" }
$aadPasswordBuffer = [System.Linq.Enumerable]::Take([System.Convert]::FromBase64String($kerbKey1.Value), 32);
$password = "kk:" + [System.Convert]::ToBase64String($aadPasswordBuffer);
#connect to tenant

Connect-AzureAD
Set-AzContext -Subscription $subscriptionid
$azureAdTenantDetail = Get-AzureADTenantDetail;
$azureAdTenantId = $azureAdTenantDetail.ObjectId
$azureAdPrimaryDomain = ($azureAdTenantDetail.VerifiedDomains | Where-Object {$_._Default -eq $true}).Name

#generate service principal names

$servicePrincipalNames = New-Object string[] 3
$servicePrincipalNames[0] = 'HTTP/{0}.file.core.windows.net' -f $storageAccountName
$servicePrincipalNames[1] = 'CIFS/{0}.file.core.windows.net' -f $storageAccountName
$servicePrincipalNames[2] = 'HOST/{0}.file.core.windows.net' -f $storageAccountName

#create app

$application = New-AzureADApplication -DisplayName $storageAccountName -IdentifierUris $servicePrincipalNames -GroupMembershipClaims "All";

#create service

$servicePrincipal = New-AzureADServicePrincipal -AccountEnabled $true -AppId $application.AppId -ServicePrincipalType "Application";

$Token = ([Microsoft.Open.Azure.AD.CommonLibrary.AzureSession]::AccessTokens['AccessToken']).AccessToken
$Uri = ('https://graph.windows.net/{0}/{1}/{2}?api-version=1.6' -f $azureAdPrimaryDomain, 'servicePrincipals', $servicePrincipal.ObjectId)
$json = @'
{
"passwordCredentials": [
{
"customKeyIdentifier": null,
"endDate": "<STORAGEACCOUNTENDDATE>",
"value": "<STORAGEACCOUNTPASSWORD>",
"startDate": "<STORAGEACCOUNTSTARTDATE>"
}]
}
'@
$now = [DateTime]::UtcNow
$json = $json -replace "<STORAGEACCOUNTSTARTDATE>", $now.AddDays(-1).ToString("s")
$json = $json -replace "<STORAGEACCOUNTENDDATE>", $now.AddMonths(6).ToString("s")
$json = $json -replace "<STORAGEACCOUNTPASSWORD>", $password
$Headers = @{'authorization' = "Bearer $($Token)"}
try {
Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method Patch -Headers $Headers -Body $json
Write-Host "Success: Password is set for $storageAccountName"
} catch {
Write-Host $_.Exception.ToString()
Write-Host "StatusCode: " $_.Exception.Response.StatusCode.value
Write-Host "StatusDescription: " $_.Exception.Response.StatusDescription
}

The result is an app with the name of our storage account:

and on the storage account you can see a hint that Kerberos is enabled.

6. API Permissions

Now you can assign API permissions to that app.

First, we need the OpenID permissions:

  • openid – Sign users in
  • profile – View users’ basic profile

Second, we need a User permission:

  • User.Read – Sign in and read user profile

After adding them, it is important to click “Grant admin consent”. For this you need sufficient permissions within Azure AD.

7. Assign SMB Permissions

To allow a user to create a profile container, we need to assign RBAC roles on the storage account.

Assign “Storage File Data SMB Share Contributor” to all AVD users, and “Storage File Data SMB Share Elevated Contributor” to a group or to users that need admin permissions.
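As a sketch, the same assignments can be made with PowerShell; the group object IDs are placeholders, and $resourceGroupName/$storageAccountName are the variables from the earlier steps:

# Assign the SMB share roles at the storage account scope
$scope = (Get-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName).Id
New-AzRoleAssignment -RoleDefinitionName 'Storage File Data SMB Share Contributor' -ObjectId '<AVD-USERS-GROUP-OBJECT-ID>' -Scope $scope
New-AzRoleAssignment -RoleDefinitionName 'Storage File Data SMB Share Elevated Contributor' -ObjectId '<ADMIN-GROUP-OBJECT-ID>' -Scope $scope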

8. Assign directory level access permissions

For me it was quite confusing where I had to set these permissions and which system I had to use. In my lab I have one Windows VM with AD DS and Azure AD Connect installed, which I used to run the scripts.

From MS Documentation I found:

The system you use to configure the permissions must meet the following requirements:

  • The version of Windows meets the supported OS requirements defined in the Prerequisites section.
  • Is Azure AD-joined or Hybrid Azure AD-joined to the same Azure AD tenant as the storage account.
  • Has line-of-sight to the domain controller.
  • Is domain-joined to your Active Directory (Windows Explorer method only).
Connect-AzAccount -Tenant $tenantId -SubscriptionId $subscriptionId
$AdModule = Get-Module ActiveDirectory;
if ($null -eq $AdModule) {
Write-Error "Please install and/or import the ActiveDirectory PowerShell module." -ErrorAction Stop;
}
$domainInformation = Get-ADDomain
$Domain = $domainInformation.DnsRoot
$domainGuid = $domainInformation.ObjectGUID.ToString()
$domainName = $domainInformation.DnsRoot
$domainSid = $domainInformation.DomainSID.Value
$forestName = $domainInformation.Forest
$netBiosDomainName = $domainInformation.DnsRoot
$azureStorageSid = $domainSid + "-123454321";
Write-Verbose "Setting AD properties on $storageAccountName in $resourceGroupName : EnableActiveDirectoryDomainServicesForFile=$true, ActiveDirectoryDomainName=$domainName,
ActiveDirectoryNetBiosDomainName=$netBiosDomainName, ActiveDirectoryForestName=$($domainInformation.Forest) ActiveDirectoryDomainGuid=$domainGuid, ActiveDirectoryDomainSid=$domainSid,
ActiveDirectoryAzureStorageSid=$azureStorageSid"
$Uri = ('https://management.azure.com/subscriptions/{0}/resourceGroups/{1}/providers/Microsoft.Storage/storageAccounts/{2}?api-version=2021-04-01' -f $subscriptionId, $resourceGroupName, $storageAccountName);
$json=
@{
properties=
@{azureFilesIdentityBasedAuthentication=
@{directoryServiceOptions="AADKERB";
activeDirectoryProperties=@{domainName="$($domainName)";
netBiosDomainName="$($netBiosDomainName)";
forestName="$($forestName)";
domainGuid="$($domainGuid)";
domainSid="$($domainSid)";
azureStorageSid="$($azureStorageSid)"}
}
}
};
$json = $json | ConvertTo-Json -Depth 99
$token = $(Get-AzAccessToken).Token
$headers = @{ Authorization="Bearer $token" }
try {
Invoke-RestMethod -Uri $Uri -ContentType 'application/json' -Method PATCH -Headers $Headers -Body $json
} catch {
Write-Host $_.Exception.ToString()
Write-Host "Error setting Storage Account AD properties. StatusCode:" $_.Exception.Response.StatusCode.value__
Write-Host "Error setting Storage Account AD properties. StatusDescription:" $_.Exception.Response.StatusDescription
Write-Error -Message "Caught exception setting Storage Account AD properties: $_" -ErrorAction Stop
}

9. Enable AzureAD functionality via Registry

reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v CloudKerberosTicketRetrievalEnabled /t REG_DWORD /d 1

10. Configure SessionHost

On my session host I configured FSLogix like this:
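As a sketch, a typical FSLogix profile container configuration via the registry looks like this (the share path is an example; the delete setting matches the behavior described below):

# FSLogix profile container settings (example values)
$reg = 'HKLM:\SOFTWARE\FSLogix\Profiles'
New-ItemProperty -Path $reg -Name 'Enabled' -Value 1 -PropertyType DWORD -Force
New-ItemProperty -Path $reg -Name 'VHDLocations' -Value '\\<storageaccount>.file.core.windows.net\<share>' -PropertyType MultiString -Force
# Lets FSLogix remove an existing local profile so the container can apply
New-ItemProperty -Path $reg -Name 'DeleteLocalProfileWhenVHDShouldApply' -Value 1 -PropertyType DWORD -Force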

Because I had already used that host with my test account, a local profile already existed. My first test did not work, but after allowing FSLogix to delete the local profile I was able to log in and my profile container was created.

To be honest, I had some trouble getting this working. Maybe it was one of my reboots that solved it 😉

Thanks for reading

AVD “Start on Connect”

When a virtual machine is running, we have to pay for CPU and RAM. When we are able to turn off (deallocate) a virtual machine, we can save costs.

With the “Start on Connect” feature we can allow end users to turn on AVD hosts, and when they log off we can deallocate the hosts again. The result: we save costs!

I will go through all the tasks needed to enable that feature.

1. Create and Assign Custom Role

First of all, we need a custom role that will be used to start our hosts.

Subscription > Access control > add > custom role
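The role can also be created with PowerShell. A minimal sketch: the role name is an example, and it clones an existing definition so that only the two required actions remain:

# Sketch: custom role that lets the AVD service start deallocated VMs
$role = Get-AzRoleDefinition -Name 'Reader'   # use any built-in role as a template
$role.Id = $null
$role.IsCustom = $true
$role.Name = 'AVD Start VM on Connect'
$role.Description = 'Allows the AVD service to start deallocated session hosts.'
$role.Actions.Clear()
$role.Actions.Add('Microsoft.Compute/virtualMachines/start/action')
$role.Actions.Add('Microsoft.Compute/virtualMachines/read')
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add('/subscriptions/<SUBSCRIPTION_ID>')
New-AzRoleDefinition -Role $role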

Now we can create a role assignment. I did this on my resource group “rg-avd”, where my host is located.

Our next step is to look for our newly created custom role. If you cannot find it, try refreshing your session.

The next part is to define members. Here we need to find “Windows Virtual Desktop”.

If you cannot see any members, your user must first be assigned the Security Administrator role.

2. Enable “Start on Connect” on Hostpool
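I enabled it in the portal, but as a sketch this can also be done with PowerShell (Az.DesktopVirtualization module; the names are examples):

Update-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name '<HOSTPOOL-NAME>' -StartVMOnConnect:$true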

Let’s check if it’s working. My pool has only one host, and this host is deallocated.

Let’s start our SessionDesktop:

Our client will wait until a host is up and connected to the AVD service.

Within the activity logs we can check who initiated the start of my host.

It was initiated by “Azure Virtual Desktop”, as designed.

That’s it, and thanks for reading!

Simple Azure VM Start/Stop Chaining using only Tags, Event Grid and Azure Functions

When you are migrating VMs from on-premises to Azure, you always have to evaluate the needed availability of the VMs involved. Your decisions in terms of VM size, storage tiers, and pricing options rely on this evaluation. In my current migration of on-prem Remote Desktop Services to Azure Virtual Desktop, we have a RemoteApp that is used quite irregularly: sometimes once per week, sometimes one to two days, and sometimes not a single time in a week. So we will go with pay-as-you-go for these VMs. We can deal with this behavior easily in Azure Virtual Desktop (planned shutdown and start on connect), but that covers only the frontend VM. In my scenario, I have some additional backend VMs which hold services needed by the running application (a license service and some web services for the DMS integration). We don’t need to run the backend VMs if nobody uses the frontend application, so I want to link the running state of these VMs with each other.

The frontend VM will be triggered by AVD’s “Start on Connect” feature, and the needed backend servers will be automatically started and deallocated depending on the frontend VM.

I know there are solutions using EventGrid + Logic App + Azure Automation. But as you may already know, serverless Azure Functions are simply more efficient in terms of scaling and pricing.

In my setup how-to, I decided to simplify the setup with two single VMs. It shouldn’t be hard for someone to adjust, because in the end, you only have to tag the dependent VMs with the same value.
So let’s start …

Create some VMs for testing

We created two resource groups for testing. In my example, I created one named “Lab_init” and one named “Lab_triggered”. This way, we can define which VMs can trigger the start process by putting them into this resource group.

Now we create two VMs, one in our “Lab_init” resource group and one in our “Lab_triggered” resource group.

I’m going with Ubuntu this time, but it doesn’t really matter. We only want to start and stop, so go with whatever you prefer.

Next, we need to tag our VMs. The value can be whatever we want, but it has to match on all VMs that we want to trigger. The code of our function (we’ll get to this later) loops through all subscriptions and searches for VMs with the same value in the bootbinding tag.
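A quick sketch of the tagging with PowerShell (VM name, resource group, and tag value are examples):

# Merge the bootbinding tag onto a VM without touching its existing tags
$vm = Get-AzVM -Name 'initVM' -ResourceGroupName 'Lab_init'
Update-AzTag -ResourceId $vm.Id -Tag @{ bootbinding = 'lab01' } -Operation Merge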

Setup Azure Function App

Now we get to the funny part.

We will create a new Azure Function App. (Serverless tier is good enough for our needs 🙂 )
Because our Function App needs to start/stop our Azure VMs across multiple subscriptions, we need a Managed Identity.
Add the Virtual Machine Contributor role for every subscription where you place VMs that need to be triggered.
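A sketch of that role assignment with PowerShell (the managed identity object ID and the subscription ID are placeholders):

# Grant the Function App's managed identity "Virtual Machine Contributor" per subscription
New-AzRoleAssignment -ObjectId '<FUNCTION-MI-OBJECT-ID>' -RoleDefinitionName 'Virtual Machine Contributor' -Scope '/subscriptions/<SUBSCRIPTION_ID>'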
Our Azure Function App needs some modules to do its job. We have to add these to the requirements.psd1 file.
Note: You shouldn’t add the full Az module, as it’s quite large. Only add the submodules you really need.
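For example, a requirements.psd1 that only pulls in what the function code below uses could look like this (the version pins are an assumption):

@{
    # Only the Az submodules the function actually needs
    'Az.Accounts' = '2.*'
    'Az.Compute'  = '4.*'
}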
Now we create our function and select “Azure Event Grid trigger”!
We enter the following code for our function:
param($eventGridEvent, $TriggerMetadata)

# Make sure to pass hashtables to Out-String so they're logged correctly
# $eventGridEvent | Out-String | Write-Host

$tAction = ($eventGridEvent.data.authorization.action -split "/")[-2]
$tVmName = ($eventGridEvent.data.authorization.scope -split "/")[-1]
$tSubscriptionId = $eventGridEvent.data.subscriptionId

# preflight check
Write-Host "Check trigger action"
if (($tAction -ne "start") -and ($tAction -ne "deallocate")) {
    Write-Warning "Unsupported action: [$tAction], we stop here"
    break
}
Write-Host "##################### Triggerinformation #####################"
Write-Host "Vm: $tVmName"
Write-Host "Action: $tAction"
Write-Host "Subscription: $tSubscriptionId"

Write-Host "Get information about trigger vm"
$context = Set-AzContext -SubscriptionId $tSubscriptionId

if ($context.Subscription.Id -ne $tSubscriptionId) {
    # break if no access
    throw "Azure Function have no access to subscription with id [$tSubscriptionId], check permissions of managed identity"
}

$tVm = Get-AzVM -Name $tVmName
$bindingGroup = $tVm.Tags.bootbinding

if (!$bindingGroup) {
    Write-Warning "No tag with bootbinding found for [$tVmName], check your tagging"
    break
}

# main
Write-Host "Query all subscriptions"
$subscriptions = Get-AzSubscription

foreach ($sub in $subscriptions) {

    Write-Host "Set context to subscription [$($sub.Name)] with id [$($sub.id)]"
    $context = Set-AzContext -SubscriptionId $sub.id

    if (!$context) {

        # break if no access
        Write-Warning "Azure Function have no access to subscription with id [$tSubscriptionId], check permissions of managed identity"
        return
    }

    # get vms with bootbinding tag
    $azVMs = Get-AzVM -Status -ErrorAction SilentlyContinue |  Where-Object { ($_.Tags.bootbinding -eq $bindingGroup) -and ($_.Name -ne $tVmName) }
    if ($azVMs) {
        $azVMs | ForEach-Object {
            Write-Host "VM [$($_.Name)] is in same binding-group, perform needed action "
            $vmSplatt = @{
                Name              = $_.Name
                ResourceGroupName = $_.ResourceGroupName
                NoWait            = $true
            }
            switch ($tAction) {
                start {
                    Write-Host "Start VM"
                    $_.PowerState -ne 'VM running' ? (Start-AzVM @vmSplatt | Out-Null) : (Write-Warning "$($_.Name) is already running")
                }
                deallocate {
                    Write-Host "Stop VM"
                    $_.PowerState -ne 'VM deallocated' ? (Stop-AzVM @vmSplatt -Force | Out-Null) : (Write-Warning "$($_.Name) is already deallocated")
                }
                Default {}
            }
        }
    }
}

Setup event grid

Thankfully, we can use an “Event Grid System Topic” for our solution, so we don’t have to code anything here. You can think of a Topic as the source, where we want to react to events that occur.
Because we want to react to events in our “Lab_init” resource group, we select Resource Groups as Types and select “Lab_init” as the resource group.
If we want to trigger something, we have to create an “Event Subscription”.
First, we give our Event Subscription a name and an endpoint. The endpoint defines what we want to trigger.
We don’t want to call our function on every event in the dependent resource group, so we make some adjustments to filter for specific events. Otherwise, we get unnecessary function calls and have to filter the events in the function code, which is not good practice if we can avoid it. In the Basic section, we reduce invocations to only successfully completed events.
In the Filter section of our Event Subscription we should also add some string filtering for the subject. This helps us only trigger our function if the event is triggered by the Microsoft.Compute provider on a virtual machine.

Validate Setup

Now let’s test our configuration

We start our “initVM”
In our Topic view, we see that some events are received by our Topic and also that some events are matched by our advanced filter.
The same information for our “Event Subscription”:
And we can also check our function output.

Log into our VMs

Check initVM
Check triggeredVM

As you can see, there is a time difference of roughly 3 minutes between the boot times, so keep that in mind. In my AVD scenario it doesn’t really matter, because we have some buffer until the user logs in and starts the application. We never had problems with that.

I hope this is useful for somebody. Feel free to adjust it!

React to MEM Logs using Event Hubs and Azure Functions

Example: Convert User-Driven provisioned Autopilot Devices to Shared Devices

I’ve got an interesting challenge from one of my customers.
Long story short: we have hybrid Azure AD-joined devices (for no really good reason, I know 😉 ), but only “User Driven Provisioning” via Windows Autopilot is available (at the moment).
Since mobile devices get shared regularly for weekend tasks, the customer wants to allow every user to use the Company Portal on these devices.
The only way to accomplish this at the moment is to remove the primary user from the device in the MEM admin portal, because this “converts” the device into a shared device.
So he asked me to automate this whenever a new device gets enrolled in Intune via Windows Autopilot.

In the past, I’ve often used Azure Monitor with Alerts and Runbooks to perform tasks like this.
But since I dug a little bit into Azure Functions in my last project, I decided to go with an alternative approach this time. (Because Azure Functions are so damn awesome, and a shout-out to Laura Kokkarinen, whose blog post helped me a lot: Link )

So let’s start. That’s the plan:

  1. Logstreaming MEM operational logs and looking for a specific “event”.
  2. Redirect formatted output of a specific event to a new logstream
  3. The redirected event triggers Azure Functions via binding
  4. Azure Function calls Microsoft Graph with a token from the Managed Identity endpoint (that’s by far the coolest part)

Create the necessary Event Hubs

We create two Event Hubs in our new namespace, one for operational logs and one for the filtered enrollment events (we will get into that later).

Forward MEM Operational Log to Event Hub

Return to the MEM Admin Portal and configure log forwarding.

Note: Now is a good time to enroll a test device, so you have some log entries to play with.

Configure Azure Stream Analytics Job

Let’s take a look at the logstream. We navigate back to our event hub namespace and open our previously selected event hub.

We save the query as a Stream Analytics job.
We add an output to our Stream Analytics job.
Don’t forget to start the job (something I unfortunately discovered only after 2 hours of troubleshooting :-)).
Note: Again, now is another good time to enroll a device, so we can validate if entries are received by our event hubs.

Create Azure Function and bind to Event Hub

We now create a new Azure Function App with our favorite runtime stack. I usually go with PowerShell, and my demo code is also written in PowerShell (so if you want simple copy and paste, select PowerShell Core).
The rest of the settings are good by default, and the serverless plan is the most beautiful one :-). Application Insights gives us a historical view.

Next, we create our first function in our new Function App and bind it to our “newenrolleddevice” event hub.
Click create, and the portal brings us to our new function. There we go to the “Code + Test” section, enter the following code, and click “Save”.

param($eventHubMessages, $TriggerMetadata)

# Write-Host "PowerShell event hub trigger function called for message array: $eventHubMessages"

$eventHubMessages | ForEach-Object {

    # get Intune device id
    $jsonOut = $_ | convertto-json
    Write-Host "Processing event: $jsonOut"

    $deviceID = $_.properties.IntuneDeviceId
    Write-Host "DeviceID: $deviceID"

    try {
        # request accesstoken from managed identity
        Write-Host "Trying to get authentication token from managed identity."
        $authToken = Receive-MyMsiGraphToken

        #Invoke REST call to Graph API
        Write-Host "Call Microsoft Graph to remove primary user from device."
        Remove-MyPrimaryUser -IntuneDeviceID $deviceID -AuthToken $authToken
    }
    catch {
        Write-Error $_
    }
}

As you might see, I use two helper functions in this example, “Receive-MyMsiGraphToken” and “Remove-MyPrimaryUser”. We add these functions to “profile.ps1”, which is loaded every time the function app does a cold start.

We append the following code to our “profile.ps1” file:
function Receive-MyMsiGraphToken {
    $Scope = "https://graph.microsoft.com/"
    $tokenAuthUri = $env:IDENTITY_ENDPOINT + "?resource=$Scope&api-version=2019-08-01"

    $splatt = @{
        Method = "Get"
        Uri = $tokenAuthUri
        UseBasicParsing = $true
        Headers = @{
            "X-IDENTITY-HEADER" = "$env:IDENTITY_HEADER"
        }
    }
    $response = Invoke-RestMethod @splatt
    $accessToken = $response.access_token

    if ($accessToken) {
        return $accessToken
    }
    else {
        throw "Could not receive auth token for msgraph, maybe managed Identity is not enabled for this function"
    }
}
function Remove-MyPrimaryUser {
    param (
        $AuthToken,
        $IntuneDeviceID
    )
    $splatt = @{
        Method = "DELETE"
        Uri = "https://graph.microsoft.com/beta/deviceManagement/managedDevices('$IntuneDeviceID')/users/`$ref"
        UseBasicParsing = $true
        ContentType = "application/json"
        # ResponseHeadersVariable = "RES"
        Headers = @{
            'Content-Type'='application/json'
            'Authorization'= 'Bearer ' +  $AuthToken
        }
    }
    $result = (Invoke-RestMethod @splatt).value

    if ([string]::IsNullOrEmpty($result)) {
        return $true
    }
    else {
        throw "Removing primary user from device ('$IntuneDeviceID') failed"
    }
}

Add MS Graph permissions to the Azure Function App

Now we have everything in place for our final part: we have to add some permissions to our Azure Function App.

We enable the managed identity for our Function App and copy the object ID to our clipboard, because we need it in the next step.
# replace with your managed identity object ID
$miObjectID = "place your object id here"

# MS Graph app ID
$appId = "00000003-0000-0000-c000-000000000000"

# replace with the API permissions required by your app
$permissionsToAdd = @(
    "DeviceManagementManagedDevices.ReadWrite.All"
)

Connect-AzureAD

$app = Get-AzureADServicePrincipal -Filter "AppId eq '$appId'"

foreach ($permission in $permissionsToAdd) {
    $role = $app.AppRoles | Where-Object Value -Like $permission | Select-Object -First 1
    New-AzureADServiceAppRoleAssignment -Id $role.Id -ObjectId $miObjectID -PrincipalId $miObjectID -ResourceId $app.ObjectId
}


# Restart the Function App after changing the permissions

Try it out

We reset and re-enroll our test device/VM and take a look. After some time, we should see the message received by our “newenrolleddevice” logstream.

And some “success” messages in our function monitoring

Some final words

  • This is only an example, so please feel free to select different tiers and plans to meet your needs.
  • Why redirect into a new Event Hub? For demonstration purposes only; in an environment with thousands of clients, this method can easily reduce the number of times your function is invoked.
  • The possibilities are basically endless: tagging based on geolocation, joining groups based on properties that are not supported at the moment, etc.

Azure Virtual Desktop and AzureAD joined VM

For some time now it has been possible to join a Windows VM directly to Azure AD. Now this is also possible with Azure Virtual Desktop.

This blog post shows all my steps until I was able to log in to my Windows 10 system.

Hostpool

Create a host pool

First of all we need some basic information, such as the pool name.

Next to the basics, we need to define the VM size, VM availability, image type, and the number of VMs.

General Settings

In addition to that, we can use an existing network or create a new one.

Network Settings

After these settings we need to define which domain we want to join. Here we can now choose between Active Directory and Azure Active Directory.

I have chosen AzureAD.

Workspace

During the host pool creation it is possible to create an assignment to a workspace. I created “tech-guy-workspace” as a new one.

Roles and Permissions

With Azure AD-joined devices we need to create a role assignment and an app group assignment. With each host pool, one default app group is created. In my test lab it is called “tech-guys-personal-pool-DAG”.

default app group

Within this app group we are able to assign users.

The second task is to assign an RBAC role, at least on the virtual machine we want to log in to. I prefer to assign the role on my resource group, so that the assignment also applies to all future hosts.

There are two roles we need to consider.

RBAC Roles

As the name says, the first role is useful when you want to log in with admin privileges on the machine. The second role is only for users who should be able to log in without admin permissions. In my lab I assigned my test user the “Virtual Machine User Login” role and my cloud-only user the “Virtual Machine Administrator Login” role.

To access host pool VMs, your local computer must be:

  • Azure AD-joined or hybrid Azure AD-joined to the same Azure AD tenant as the session host.
  • Running Windows 10 version 2004 or later, and also Azure AD-registered to the same Azure AD tenant as the session host.

Host pool access uses the Public Key User to User (PKU2U) protocol for authentication. To sign in to the VM, the session host and the local computer must have the PKU2U protocol enabled. For Windows 10 version 2004 or later machines, if the PKU2U protocol is disabled, enable it in the Windows registry as follows:

  1. Navigate to HKLM\SYSTEM\CurrentControlSet\Control\Lsa\pku2u.
  2. Set AllowOnlineID to 1 (see the command below).
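For example, from an elevated prompt (the same pattern as the registry command used earlier in this post):

reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\pku2u /v AllowOnlineID /t REG_DWORD /d 1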

and here we go.

If you need to use a client other than the Windows one, then you need to enable the RDSTLS protocol. Just add a new custom RDP property to the host pool, targetisaadjoined:i:1. Azure Virtual Desktop then uses this protocol instead of PKU2U.
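A sketch of setting that property with PowerShell (Az.DesktopVirtualization module; names are examples). Note that -CustomRdpProperty replaces the whole property string, so include any other properties you already use:

Update-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name '<HOSTPOOL-NAME>' -CustomRdpProperty 'targetisaadjoined:i:1'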

Azure Backup – App Consistent

Diagram showing Linux application-consistent snapshot by Azure Backup.
https://docs.microsoft.com/en-us/azure/backup/backup-azure-linux-database-consistent-enhanced-pre-post

Recently I migrated some Linux systems with Azure Migrate from a VMware environment to Azure. We also used Azure Backup to take a daily backup of all VMs, including all databases, but we did not have an application-consistent one. I needed some troubleshooting time to figure out how it works. This step-by-step guide shows an example of how I did it and how to prepare a test environment, including how to install MySQL, create a database, and configure Azure Backup for an app-consistent backup.

  1. Install MySQL
  2. Create a Database
  3. Configure Azure Backup

Install MySQL

Prerequisites

To follow this guide you need to use (because I did 😉 ):
– Ubuntu 20.04

sysop@linux01:/$ sudo apt update

output:

sysop@linux01:/$ sudo apt install mysql-server

$ systemctl status mysql.service

output:

Create Test DB

$ sudo mysql
mysql> create database techguysdb;

mysql> show databases;

output:

Configure Azure Backup

To configure Azure Backup you need to do the following:

  1. Download and prepare VMSnapshotPluginConfig.json
  2. Prepare the pre and post scripts
  3. Enable Azure Backup for your Linux VM
  4. Shut down the Linux VM and run a backup
  5. Start the machine and run a second backup

VMSnapshotPluginConfig

I followed the Microsoft documentation https://docs.microsoft.com/en-us/azure/backup/backup-azure-linux-app-consistent

First we need to download the VMSnapshotPluginConfig.json file here: https://github.com/MicrosoftAzureBackup/VMSnapshotPluginConfig.

{
"pluginName" : "ScriptRunner",
"preScriptLocation" : "",
"postScriptLocation" : "",
"preScriptParams" : ["", ""],
"postScriptParams" : ["", ""],
"preScriptNoOfRetries" : 0,
"postScriptNoOfRetries" : 0,
"timeoutInSeconds" : 30,
"continueBackupOnFailure" : true,
"fsFreezeEnabled" : true
}

This file contains different values that need to be changed to fit the current environment. My file looks like this:

{
"pluginName" : "ScriptRunner",
"preScriptLocation" : "/scripts/pre.sh",
"postScriptLocation" : "/scripts/post.sh",
"preScriptParams" : ["", ""],
"postScriptParams" : ["", ""],
"preScriptNoOfRetries" : 2,
"postScriptNoOfRetries" : 2,
"timeoutInSeconds" : 30,
"continueBackupOnFailure" : false,
"fsFreezeEnabled" : true
}

I changed the script locations and “continueBackupOnFailure” (the latter helped me to see an error message within the Azure Backup jobs if one of the scripts fails).

VMSnapshotPluginConfig.json needs to be copied to /etc/azure. If this directory does not exist, simply create it. After that we need to change the permissions on the file so that only root has read and write access.

sysop@linux01:/etc/azure$ sudo chmod 600 VMSnapshotPluginConfig.json

Output of ls -l:

Pre and PostScript

For the pre and post scripts I used the examples from Veeam: https://bp.veeam.com/vbr/VBP/4_Operations/O_Application/mysql.html

My pre-script looks like this:

My post-script looks like this:

Both scripts must be copied to the Linux system; I copied them to /scripts. The next important task is to set permissions 600 on both files, otherwise Azure Backup will fail.

sysop@linux01:/scripts$ sudo chmod 600 pre.sh
sysop@linux01:/scripts$ sudo chmod 600 post.sh

Backup

Enable backup for a virtual machine

If backup is enabled, it looks like this: it is only configured but has never been executed, and the restore points overview shows no backup.

1st Backup

Very important: the first backup needs to be done while the virtual machine is deallocated!

Then run the backup job as configured.

The backup includes two steps: first take a snapshot, then copy the data to the vault.

When the snapshot task is done, the Linux system can be started, and our vault shows a crash-consistent backup.

2nd backup

Once the VM is up and running and all scripts and config files are in place, we can trigger the second backup. Now the service should use the configuration, and the result should be an app-consistent backup 🙂

and here we go…

I hope this step-by-step guide helps you get this working.

about Mr. Chris

Christopher is a walking Swiss army knife for IT. For 20 years he has worked mostly on Microsoft technologies across the board, with some larger sidesteps into networking and open-source configuration management. Currently he mainly works on hybrid environments, implementing various Microsoft cloud services and doing some magic with PowerShell ;-).

