Monday, November 22, 2021

User Profile Ramifications when Renaming Users on Azure AD-Joined Computers

I'm starting to work more with devices that are Azure AD-joined rather than domain-joined. One of my key questions was what happens to user profiles when an Azure AD user sign-in name (UPN) is changed. I was pleasantly surprised by how well it worked.

For my testing, I created an Azure AD user and signed in to create a profile. During sign-in, I created a PIN for authentication. While signed in, I also configured an Outlook profile and OneDrive. Then I tried changing the domain portion of the username and the userid portion of the username. The results were the same:

  • I could still sign in with the PIN.
  • I could sign in as the same user (username displayed on sign-in screen) with the password.
  • I could sign in with the new username (typed in) and password.

After signing in:

  • The workplace account was updated to the new username.
  • Outlook was still able to sign in without user intervention and updated the account.
  • OneDrive continued to function without user intervention and updated the account.
  • The same Windows 10 user profile was retained.


Wednesday, November 17, 2021

Managing Microsoft 365 Licenses by using Microsoft Graph

Microsoft has announced that after June 2022, the MSOL and AzureAD cmdlets for managing user licenses in Microsoft 365 will cease working. These cmdlets rely on management functionality that is being retired. To manage licenses programmatically, you need to start using Microsoft Graph.

Here is the announcement:

Microsoft Graph is a web API that you can use to manage Microsoft 365 users, groups, and services. If you're a programmer, then perhaps the idea of building a web request to perform administrative tasks sounds like a good idea. However, for an admin guy like me that typically uses PowerShell cmdlets for management tasks, building web requests is a bit painful. Fortunately, the Microsoft Graph PowerShell SDK has been released, which provides PowerShell cmdlets to access Microsoft Graph features.

To get more information about the Microsoft Graph PowerShell SDK:

 Connecting with Microsoft Graph

Just like you use Connect-AzureAD or Connect-MsolService, for Microsoft Graph, you use Connect-MgGraph. When you connect, you need to specify a scope that defines your permissions. Unlike the older modules, the connection does not automatically gain full permissions based on your roles, such as Global Admin. I haven't experimented with exactly which scopes are required to manage user licenses. However, I can confirm that the following example does work.

Connect-MgGraph -Scopes "User.ReadWrite.All","Directory.ReadWrite.All"

To get more information about Microsoft Graph scopes:

License Structure and Naming

If you've been managing licenses through the web interface in Microsoft 365 or the MSOL cmdlets, you're used to seeing license names such as Office 365 E3. When you manage licenses by using Microsoft Graph, you need to know the SkuId property of the licenses available in your tenant. You can obtain the SkuId for a license by using Get-MgSubscribedSku as shown in the following figure. The SkuPartNumber property is a more user-friendly name that you can recognize.
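
A query like the following is a minimal sketch of that lookup, listing each license with its identifiers (the property names are from the Graph PowerShell SDK output):

```powershell
# List the licenses in the tenant with their identifiers
# SkuPartNumber is the recognizable name (e.g. ENTERPRISEPACK for Office 365 E3)
Get-MgSubscribedSku | Select-Object SkuId, SkuPartNumber, ConsumedUnits
```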

Within each license type, there are also service plans. These correlate with apps provided by a license, such as Exchange Online (Plan 2). To enable or disable the service plans, you need to use the ServicePlanId for a service plan. If you place your subscribed SKUs in a variable, you can view the service plans included in the SKU as shown in the following figure.
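
As a sketch, you can capture the SKUs in a variable and expand the ServicePlans property of one of them (the index assumes at least one subscribed SKU in the tenant):

```powershell
# Place the subscribed SKUs in a variable, then list the service plans
# included in the first SKU
$skus = Get-MgSubscribedSku
$skus[0].ServicePlans | Select-Object ServicePlanId, ServicePlanName
```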

To get a list of generally available SKUs and their service plans:

Viewing Assigned Licenses

You can view the licenses assigned to a user by using the Get-MgUserLicenseDetail cmdlet as shown in the following example:

Get-MgUserLicenseDetail -UserId <UserId or UPN>

This command returns the licenses assigned to the user. An array of licenses is returned if multiple licenses have been assigned. Within each license returned, you can view the ServicePlans property to see whether any service plans have been disabled for the user.

You can also query assigned license information by using Get-MgUser. The licensing information isn't returned by default and you need to specify that the AssignedLicenses property will be retrieved as shown in the figure below. Notice that these results list the service plans that are disabled for a license.
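
For example, the following sketch requests the licensing data explicitly (the UPN is a placeholder):

```powershell
# AssignedLicenses is not returned by default, so request it explicitly
# user@contoso.com is a placeholder UPN
Get-MgUser -UserId "user@contoso.com" -Property AssignedLicenses,UserPrincipalName |
    Select-Object -ExpandProperty AssignedLicenses
```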

Querying Users with Assigned Licenses

If you want to query all of the users with a specific license, you can do this by using Get-MgUser with a filter for a specific SkuId. The example below shows the syntax.

Get-MgUser -Filter "assignedLicenses/any(x:x/skuId eq 78e66a63-337a-4a9a-8959-41c6654dfb56)" -Property AssignedLicenses,UserPrincipalName,Id

When you are filtering based on AssignedLicenses there are some limitations on the results returned. By default, only 100 results are returned. If you use the PageSize parameter, you can specify up to 999 results are returned as shown below.

Get-MgUser -Filter "assignedLicenses/any(x:x/skuId eq 78e66a63-337a-4a9a-8959-41c6654dfb56)" -PageSize 999

If you have a larger tenant, and you try to use the All parameter to return results beyond these limits, you will get the following error: Get-MgUser : The specified page token value has expired and can no longer be included in your request. To avoid this error, you need to query a list of all users and then filter by using Where-Object as shown below.

$allusers = Get-MgUser -All -Property AssignedLicenses,UserPrincipalName,Id,DisplayName
$A1plusUsers = $allusers | Where-Object {$_.AssignedLicenses.SkuId -contains "78e66a63-337a-4a9a-8959-41c6654dfb56"}

Modifying Assigned Licenses

To add or remove licenses for a user, you use the Set-MgUserLicense cmdlet. When you run the cmdlet, you need to provide the following parameters:

  • UserId. The user being modified. You can specify the user by the object Id or UserPrincipalName.
  • AddLicenses. A hash table that specifies the SkuId of a license being added and the ServicePlanId of any service plans that are being disabled.
  • RemoveLicenses. A string that identifies the SkuId of a license being removed.

The following code shows how to build a hash table for the AddLicenses parameter. This specifies a license SkuId and a service plan that's disabled in the license.

$A1FacultySku = @{
    SkuId = "94763226-9b3c-4e75-a931-5c89701abe66"
    DisabledPlans = @("9aaf7827-d63c-4b61-89c3-182f06f82e5c")
}

The following code stores the SkuID of the license that will be removed.

$A1PlusFacultySku = "78e66a63-337a-4a9a-8959-41c6654dfb56"

The command that modifies the user license is below. Note that the UserId parameter also accepts a UPN.

Set-MgUserLicense -UserId $User.Id -AddLicenses $A1FacultySku -RemoveLicenses $A1PlusFacultySku

The RemoveLicenses and AddLicenses parameters are mandatory. If you don't provide an empty array, you'll get an error such as: Set-MgUserLicense : One or more parameters of the function import 'assignLicense' are missing from the request payload. The missing parameters are: removeLicenses. If you don't want to remove any licenses, you need to provide an empty array for RemoveLicenses as shown below. If you are only removing licenses, you need to provide an empty array for the AddLicenses parameter.

Set-MgUserLicense -UserId $User.Id -AddLicenses $A1FacultySku -RemoveLicenses @()

If you want to modify the disabled plans for a license, you build a new hash table with the license and all of the plans you want disabled. Then you apply the new hash table with the AddLicenses parameter. The new license assignment overwrites the existing license assignment.
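
As a sketch, assuming a second service plan id is held in $AnotherPlanId (a placeholder):

```powershell
# Rebuild the hash table with the full list of plans to disable, then reapply it
# $AnotherPlanId is a placeholder for a second ServicePlanId
$A1FacultySku = @{
    SkuId = "94763226-9b3c-4e75-a931-5c89701abe66"
    DisabledPlans = @("9aaf7827-d63c-4b61-89c3-182f06f82e5c", $AnotherPlanId)
}
Set-MgUserLicense -UserId $User.Id -AddLicenses $A1FacultySku -RemoveLicenses @()
```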

If you want to add multiple licenses, you can provide a comma-separated list of hash tables. I have not explicitly tested it, but I think providing an array of hash tables would also work.

If you want to remove multiple licenses, create an array with the SkuIDs that you want to remove.
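
A sketch of removing two licenses in one call ($OtherSkuId is a placeholder for a second SkuID):

```powershell
# AddLicenses still requires an empty array when only removing
$skusToRemove = @("78e66a63-337a-4a9a-8959-41c6654dfb56", $OtherSkuId)
Set-MgUserLicense -UserId $User.Id -AddLicenses @() -RemoveLicenses $skusToRemove
```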

Thursday, October 28, 2021

AADSTS90072 User Does not Exist in Tenant

During a recent migration project from one tenant to another, a test user was unable to sign in. The sign-in page in Office 365 redirected to AD FS on-premises for authentication. The user credentials worked in AD FS and the web browser was redirected back to Office 365. Then this error was displayed: 

Sign in

Sorry, but we’re having trouble signing you in.

AADSTS90072: User account '' from identity provider '' does not
exist in tenant 'Byron Co' and cannot access the application
'4765445b-32c6-49b0-83e6-1d93765276ca' (OfficeHome) in that
tenant. The account needs to be added as an external user in the
tenant first. Sign out and sign in again with a different Azure
Active Directory user account.

This error indicates that the user authenticated by AD FS does not exist in Azure AD. Based on the UPN, the user existed both in on-premises AD and Azure AD. So, there is some other property being used to match the two objects after AD FS authentication.

To understand where this broke down, you need to understand how objects in AD are linked with objects in Azure AD. There is an ImmutableID property on Azure AD users that links Azure AD users to on-premises AD users. Early implementations of Azure AD Connect (or Dirsync) copied the object GUID from on-premises AD and used that value for the ImmutableID. This worked well until you migrated user objects to a new AD domain or AD forest, where they'd have a different GUID.

New implementations of Azure AD Connect use ms-ds-ConsistencyGUID in the on-premises user object instead of GUID. This value can be copied between domains to preserve synchronization during object migrations. By default ms-ds-ConsistencyGUID is populated with the same value as the object GUID the first time the object is synced.

AD FS authentication uses the object GUID or ms-ds-ConsistencyGUID during authentication. This value must match the ImmutableID in AzureAD to allow authentication to complete. The older instructions for configuring AD FS authentication manually had you configure a rule for object GUID. If you use Azure AD Connect to configure AD FS then it creates rules that use ms-ds-ConsistencyGUID if populated or object GUID. This article talks about the rule configuration:

If you update your deployment of Azure AD Connect to use ms-ds-ConsistencyGUID as the source anchor and forget to update AD FS to allow ms-ds-ConsistencyGUID in the authentication process, AD FS authentication will continue to work because object GUID and ms-ds-ConsistencyGUID are the same value by default. However, when you start migrating objects and retain the ms-ds-ConsistencyGUID (which will now be different from the object GUID), authentication starts to fail because the token passed back to Office 365 for authentication contains the object GUID, which doesn't match the ImmutableID/ms-ds-ConsistencyGUID. Thus the error message above.

In our case, the AD migration tool we were using copied the ms-ds-ConsistencyGUID from a source AD domain to target AD domain which caused our authentication issue. Because users were getting new mailboxes in this migration, we didn't need to maintain ms-ds-ConsistencyGUID. Our short term fix was to copy the object GUID value and place that in ms-ds-ConsistencyGUID and immutableID. However, the correct long term solution is to update AD FS to correctly use ms-ds-ConsistencyGUID during authentication.

This article has some examples you can use to convert object GUID to ms-ds-ConsistencyGUID and ImmutableID:
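
The core of that conversion is base64-encoding the GUID bytes. A minimal sketch (jsmith is a placeholder SamAccountName):

```powershell
# Convert an on-premises objectGUID to the base64 format used by ImmutableID
$user = Get-ADUser jsmith   # placeholder SamAccountName
$immutableId = [System.Convert]::ToBase64String($user.ObjectGUID.ToByteArray())
$immutableId
```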

Wednesday, October 20, 2021

Query the Signed in User When Running Script as System

I'm working on a desktop migration project where we run some PowerShell scripts to prepare the computer for migration. As part of this we need the locally signed in user.

Normally, you can obtain the locally signed in user from an environment variable:

$env:USERNAME

However, we're running the script as SYSTEM. So, that returned value is incorrect. It's the username associated with the PowerShell instance (SYSTEM), not the signed in user.

You can query the signed in user when you run a script as SYSTEM by using Get-WmiObject:

(Get-WmiObject -ClassName Win32_ComputerSystem).Username
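
On newer versions of PowerShell where Get-WmiObject isn't available, Get-CimInstance should return the same information:

```powershell
# Returns DOMAIN\username of the console user, or nothing if no one is signed in
(Get-CimInstance -ClassName Win32_ComputerSystem).UserName
```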

Sunday, September 5, 2021

Set User Department Based on OU

When you create dynamic distribution groups on-premises, you have the option to create them based on organizational unit. In Exchange Online (EXO), you don't have this option because Azure AD doesn't have the OUs for your tenant.

There are many attributes available for creating dynamic distribution groups in EXO and one available through the web interface is department. So, to simulate OU-based groups, we can set the department attribute.

To do this, I created a script on a DC that runs once per hour and sets the Department attribute based on the OU. When you run a script as SYSTEM on a DC, it has the ability to modify Active Directory. The function below is the core of the script.

#Function requires the OU as a distinguished name
Function Set-UserDepartment {
    Param(
        [parameter(Mandatory=$true)] $OU,
        [parameter(Mandatory=$true)] $Department
    )

    Write-Host ""
    Write-Host "Setting department attribute as $Department for users in $OU"
    #Find null values
    $nullusers = Get-ADUser -Filter {Department -notlike "*"} -Properties Department -SearchBase $OU
    #Find wrong values
    $wrongvalue = Get-ADUser -Filter {Department -ne $Department} -Properties Department -SearchBase $OU

    #Create one array of all users to fix
    $users = $nullusers + $wrongvalue

    Write-Host "null value: " $nullusers.count
    Write-Host "wrong value: " $wrongvalue.count

    #Set department
    Foreach ($u in $users) {
        Set-ADUser $u.DistinguishedName -Department $Department # -WhatIf
    }
}

This function:

  • Expects the OU to be passed as a distinguished name
  • Finds users in the OU (and sub-OUs) with the department set to $null
  • Finds users in the OU (and sub-OUs) with the incorrect department
  • Sets the Department value as specified when you call the function for all users identified

Querying the users that don't have department set correctly and calling Set-ADUser for only those users is much faster than setting all users each time.

The count for null value or wrong value is incorrect when there is a single item, because a single item is not an array. You could improve this by forcing the results to be arrays when populating the variables, or by checking for a single item first. For my purposes, this was sufficient.
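
For example, wrapping the queries in the array subexpression operator makes the count reliable even for zero or one result:

```powershell
# @() forces the result to be an array so .count always works
$nullusers = @(Get-ADUser -Filter {Department -notlike "*"} -Properties Department -SearchBase $OU)
$wrongvalue = @(Get-ADUser -Filter {Department -ne $Department} -Properties Department -SearchBase $OU)
```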

Within the script, you can call the function as many times as required to set the attributes. You just pass the OU and the department value to the function like below.

#Call function to set department for Marketing
Set-UserDepartment -OU "OU=Marketing,DC=Contoso,dc=com" -Department Marketing

Tuesday, August 10, 2021

Script to Update DNS Record Permissions

When you have secure dynamic update configured for DNS zones, the individual DNS records are protected by security permissions. The host records are typically secured by the associated computer account. The PTR records are typically secured by the DHCP server account or the associated computer account depending on whether it's a static or dynamic IP address.

If you have highly available DHCP servers, they should be configured with a user account for dynamic DNS updates. This user account is used by both DHCP servers to ensure that records created by one DHCP server can be updated by the other.

If you have highly available DHCP and don't use a shared account, then you'll see errors in the DHCP event log (Event ID 20322) indicating that the DNS record couldn't be updated. After you configure the shared account, the permissions on the DNS records will still be incorrect. The following script adds Full Control permissions on A and PTR records for the shared account. This allows the properly configured DHCP servers to update existing records.

You'll need to do some editing on this script for your environment:

  • $DynamicDnsUser needs to be set to your shared user account.
  • $Server needs to be set to the name of your DNS server.
  • $zones needs to contain the zones you want to modify. 

#DynamicDnsUser is the user configured on both DHCP servers for dynamic DNS
#This uses the SamAccountName of the user
$DynamicDnsUser = Get-ADUser ddnsuser

#Query the SID for this user and create a ACE allowing full control
$SID = New-Object System.Security.Principal.SecurityIdentifier $DynamicDnsUser.SID.Value
$ACE = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $SID, "GenericAll", "Allow"

#When running DNS cmdlets from a workstation
#you need to specify the server you are acting on
$server = "DNSServer"

#query a list of all zones to loop through and set permissions
#$zones = Get-DnsServerZone -ComputerName $server
#$zones = $zones | where ZoneType -eq Primary

$zones = Get-DnsServerZone ""

Foreach ($zone in $zones) {

    #Query all records in the zone
    #Record type ensures that we get the correct type of records
    #based on forward or reverse lookup zones
    If ($zone.IsReverseLookupZone -eq $true) {
        $recordType = "PTR"
    } Else {
        $recordType = "A"
    }
    $records = Get-DnsServerResourceRecord -ComputerName $server -ZoneName $zone.ZoneName -RRType $recordType

    #Loop through all records in the zone and add
    #the ACE for the dynamic DNS user
    #Need to set the location to AD: for the *-ACL cmdlets to work
    Foreach ($record in $records) {

        Push-Location -Path AD:
        $ACL = Get-Acl -Path $record.DistinguishedName
        $ACL.AddAccessRule($ACE)
        $ACL | Set-Acl -Path $record.DistinguishedName
        Pop-Location

    } #end foreach records

} #end foreach zones

Please be aware this only works to allow the DHCP server to update the DNS records. In most organizations, the host records are dynamically updated by the individual computers. If you want to add the correct computer account to a DNS record, then you need to approach this differently.

The following link has a script that finds a computer account that matches a host record and assigns permissions to that computer account. This script was used as a starting point for my script above.

Wednesday, July 21, 2021

Dynamic DNS Settings for Highly Available DHCP Servers

Windows DHCP servers can integrate with DNS to perform dynamic DNS on behalf of clients. This is useful when DHCP clients such as printers or mobile phones are not able to perform their own dynamic DNS updates. The DHCP server can also perform secure dynamic DNS updates when the client can't.

You can configure dynamic DNS settings at the IPv4 node (server level) or at the individual scope. If you don't configure dynamic DNS settings at the scope level, they are inherited from the server level. If you update dynamic DNS settings at the server level those new settings are used by all scopes that don't have dynamic DNS settings explicitly defined.

Unfortunately, there is no easy way to identify when dynamic DNS settings are configured at the scope level instead of the server level. If the settings are different then they are definitely configured at the scope level. But, if the settings are the same, they could be configured at either level.

When you have scopes configured for high availability with two Windows DHCP servers, both servers can service the scope. If you have accidentally configured the dynamic DNS settings at the IPv4 node differently on the two servers, clients can receive inconsistent settings depending on which DHCP server provides the lease.

For example, DHCP1 and DHCP2 are configured with a failover relationship that is in load balancing mode. Scopes using this failover relationship service half of requests using DHCP1 and half of requests using DHCP2.

At the IPv4 node of DHCP1, it is configured to perform dynamic DNS updates only when requested by the clients.

At the IPv4 node of DHCP2, it is configured to perform dynamic updates for all clients.

If you create a new scope named Client LAN and configure it to use the failover relationship, the scope appears on both servers. When you view the DNS tab in the properties of Client LAN, the settings match the server settings. So, the settings you see vary depending on which DHCP server the DHCP admin console is connected to.

When a client leases an address from DHCP1, the dynamic DNS settings from the IPv4 node of DHCP1 are used. When a client leases an address from DHCP2, the dynamic DNS settings from the IPv4 node of DHCP2 are used.

To avoid this, you can do the following:

  • Ensure that the IPv4 settings are the same on both servers (you really should)
  • Manually configure the dynamic DNS settings in each scope
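
The per-scope configuration can also be scripted. A sketch, with DHCP1 and scope 10.0.0.0 as placeholder names:

```powershell
# Explicitly define dynamic DNS settings on a scope so it no longer
# inherits from the IPv4 (server) level; server name and scope are placeholders
Set-DhcpServerv4DnsSetting -ComputerName "DHCP1" -ScopeId 10.0.0.0 `
    -DynamicUpdates OnClientRequest -DeleteDnsRROnLeaseExpiry $true
```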

Secure Dynamic Update Credentials

Another consideration when using highly available DHCP with dynamic DNS updates is the credentials for secure updates in DNS. By default, when a DHCP server creates a DNS record that allows only secure dynamic updates, the record is secured with permissions based on the computer account of the DHCP server. When two DHCP servers are working together, this can result in DHCP1 creating a DNS record that DHCP2 can't update.

To ensure that both highly available DHCP servers can service all records created by either server, you need to configure a user account that is used by both servers to secure dynamic DNS records. This is configured on each server on the Advanced tab in the properties of IPv4.

After configuring the DNS dynamic update credentials on both servers, the DNS records are secured by that user account. Since both servers use the same user account, they can update DNS records created by the other DHCP server. This user account does not require any special permissions. It just needs to be a member of Domain Users. And of course, you should set the password to not expire.

If the DNS zones are configured to allow insecure dynamic updates, then security is ignored during dynamic DNS updates and the credentials are not important.
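
The same credentials can be set from PowerShell. A sketch with placeholder names:

```powershell
# Configure the same dynamic DNS update account on both DHCP servers
# CONTOSO\ddnsuser, DHCP1, and DHCP2 are placeholders
$cred = Get-Credential -Message "Dynamic DNS update account (e.g. CONTOSO\ddnsuser)"
Set-DhcpServerDnsCredential -Credential $cred -ComputerName "DHCP1"
Set-DhcpServerDnsCredential -Credential $cred -ComputerName "DHCP2"
```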

Tuesday, June 1, 2021

DNS Forwarding Timeouts

When you configure forwarders on Windows DNS servers, it's not obvious what the timeout values are. You might intuitively think that more forwarders is better. In reality, with the default values, you're just kidding yourself.

DNS forwarders have a default timeout of 3 seconds. If the first forwarder does not respond within 3 seconds then the second forwarder is contacted, and so forth.

However, there is an overall recursion timeout of 8 seconds. After 8 seconds no more forwarders will be contacted. So, best case, the process looks like this:

  • 0s - Contact forwarder 1
  • 3s - Contact forwarder 2
  • 6s - Contact forwarder 3
  • 8s - recursion timeout (process ends)

As you can see, only the first three forwarders listed are ever used. Putting more than 3 forwarders on a DNS server is misleading because forwarders 4 and up will never be contacted.

Conditional forwarders have a similar process but with different timeout values. Conditional forwarders have a default timeout of 5 seconds along with the recursion timeout of 8 seconds. This means that only two conditional forwarders are ever contacted.

The conditional forwarder process looks like this:

  • 0s - Contact conditional forwarder 1
  • 5s - Contact conditional forwarder 2
  • 8s - recursion timeout (process ends)

Again, I suggest that you never list more than two conditional forwarders; anything beyond that is misleading.

If you want to allow additional forwarders or conditional forwarders to be queried, you can modify the default values in the registry of the DNS servers. However, be sure to do this on all DNS servers so that it is consistent. And, document it as part of your domain controller build process so that it is configured on new domain controllers too.

Registry keys to modify the default timeout values:

  • Recursion timeout (per DNS server)
    • HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\RecursionTimeout
  • Forwarding timeout (per DNS server)
    • HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters\ForwardingTimeout
  • Forwarder timeout (per zone/conditional forwarder)
    • HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server\Zones\<zone_name>\ForwarderTimeout
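
These values can also be adjusted with the DnsServer cmdlets rather than editing the registry directly. A sketch, with DC1 and contoso.com as placeholders; timeouts are in seconds:

```powershell
# Raise the per-server recursion and forwarding timeouts
Set-DnsServerRecursion -ComputerName "DC1" -Timeout 15
Set-DnsServerForwarder -ComputerName "DC1" -Timeout 5

# Per-zone timeout for a conditional forwarder
Set-DnsServerConditionalForwarderZone -ComputerName "DC1" -Name "contoso.com" -ForwarderTimeout 5
```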

For more detailed information about this process, see:

Wednesday, March 24, 2021

Script to compare file presence in directory structures

I'm still running into a few organizations that need to convert their sysvol replication from FRS to DFS-R. The process for doing that is quite straightforward, but if FRS replication is broken, then there is the risk that some data could be lost during this process.

As part of due diligence before converting, I like to compare the contents of sysvol among the domain controllers. The PDC emulator is the default source for new sysvol data. So, I compare that to other domain controllers. To help me with this I've created a PowerShell script that compares the files to identify any that are not the same on two servers.

I use this for sysvol, but you could use it to compare any two data structures. This script does not compare time stamps, file size, or file contents. It only looks for presence.


#Domain controller paths to sysvol
#Set $srcpath and $targetpath (e.g. UNC paths to sysvol on two DCs) before running

#Get list of files
$srcfiles = get-childitem -Path $srcpath -File -Recurse
$targetfiles = get-childitem -Path $targetpath -File -Recurse

#Add property looks only at relative file name path
#Required for proper comparison without server name
Foreach ($file in $srcfiles) {

    $cleanpath = ($file.FullName).replace($srcpath,"")
    $file | Add-Member -NotePropertyName ShortPath -NotePropertyValue $cleanpath -Force
}

Foreach ($file in $targetfiles) {

    $cleanpath = ($file.FullName).replace($targetpath,"")
    $file | Add-Member -NotePropertyName ShortPath -NotePropertyValue $cleanpath -Force
}

$dif = Compare-Object $srcfiles $targetfiles -Property ShortPath #-PassThru

Write-Host "Source ( <= ) is: " $srcpath
Write-Host "Target ( => ) is: " $targetpath
Write-Host ""
$dif

Tuesday, March 23, 2021

0x80070780: The file cannot be accessed by the system

Today I got this error when trying to import a virtual machine on my newly rebuilt Hyper-V host running Windows Server 2019:

0x80070780: The file cannot be accessed by the system

I tried playing with permissions, but it kept erroring out saying that it couldn't access the vhdx files. Finally, I realized that when I rebuilt the host, I had forgotten to install the data deduplication feature; the system could see the file names but couldn't access the deduplicated data.

After I enabled the data deduplication role service and rebooted, the system recognized the drives had deduplicated data and could access the files without issue.

Thursday, February 25, 2021

Synchronize membership from an AD group to a Microsoft 365 group

A client recently wanted to control access to Microsoft Stream content by using members of an AD security group that is already synchronized into Azure AD. However, Microsoft Stream access can only be controlled by using Microsoft 365 groups (formerly Office 365 groups). This led to identifying how we can synchronize the group membership from the AD group to the Microsoft 365 group.

The first option I thought of was Windows PowerShell. This is certainly possible, but it would need to be run as a scheduled task with credentials stored securely. Relying on an on-premises scheduled task was suboptimal. So, I went searching and found a template named Synchronize an Azure AD Group with an Office 365 Group on a recurring basis in Power Automate (formerly Flow) that is for exactly this purpose.


At a high level, this is what the flow does:

  • Sets a schedule for running
  • Queries membership from a source group
  • Queries membership from a target group
  • Compares the source and target group membership
  • Adds source members not present in the target group
  • Sends a notification email identifying source members that were added
  • Identifies target members that are not present in the source group (with the option to provide an exceptions list of target members not to remove)
  • Sends an approval request to remove target members that are not present in the source group
  • Removes target members that are not present in the source group when the request is approved

When you add the template it will prompt you for permissions to create connections required to run the flow. The Office 365 Groups and Azure AD connectors require credentials to query and modify group memberships.

After you've created the flow from the template there are a few items that you need to configure in the flow. The Recurrence box defines how often the flow runs. The example below is configured to run the flow daily at 2am.

Next you need to configure the SourceGroupID and the TargetGroupID. You need to obtain the Object Id attribute for these groups from either the Azure AD admin center or Windows PowerShell. The Object ID for the group is placed in the Value box. There are separate steps for SourceGroupID and TargetGroupID. Effectively, these are populating variables used later in the script.
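
A sketch of the PowerShell lookup, using the AzureAD module current at the time ("Marketing Staff" is a placeholder display name):

```powershell
# Look up the Object Id of a group by display name
Get-AzureADGroup -SearchString "Marketing Staff" | Select-Object DisplayName, ObjectId
```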

You also need to define the ApprovedOwnerUPN. This user approves removals from the target group when necessary. Enter the UPN for that user into the Value box.

Members in the target group that are not members in the source group are removed. If there are some unique members that you want to remain in the target group, you can define them in the ExcludedFromRemove variable. This might be useful for cloud only users such as an admin account.

To define the user accounts that are not removed, click on the createArray function in the Value box. This opens a box where you can manually enter the users you don't want to remove. In the screenshot below, the listed users are excluded from removal.

The List source group members and List target group members steps allow you to define a maximum number of results that are returned. By default, this value is 500. You need to define this value large enough to gather the entire membership of each group. Enter the number of group members in the Top box. If you leave this value blank, only 100 results are returned.

If your groups have membership higher than 1000 then you need to perform additional configuration to allow more than 1000 results. Click the ellipsis in the List source group members step and then select Settings to get the following screen. To allow more than 1000 results you need to turn on pagination and specify the number of results you want to allow. The example below allows up to 2000 members. After you've done this, the Top value is ignored.

The only other issue I had when using this flow was related to the officeLocation attribute of the user objects in Azure AD. This attribute correlates with the physicalDeliveryOfficeName attribute in on-premises AD and is visible as Office on the General tab of a user in Active Directory Users and Computers.

If the officeLocation attribute is blank, then the flow fails. In the environments I deal with, this value is often blank. Rather than putting in dummy office information, we can edit the flow to ignore the null value. 

The two affected steps are:

  • Parse UsersAdded values in Send mail if UsersAdded variable is not empty
  • Parse MembersToRemove values in Get approval if UsersToRemove is not empty

Both of these steps have a Schema box that defines how attributes are selected from earlier data. The officeLocation attribute is defined as a string and thus fails when there is a null value. A quick way to fix this problem is by removing officeLocation from the schema. Remove the three highlighted lines from both steps.

An alternative that I haven't tested is modifying the acceptable values for officeLocation. You can see that above officeLocation, the mobilePhone attribute allows the type to be either string or null. I believe that syntax would also work for officeLocation.

For more information about Power Automate (formerly Flow), see:

Wednesday, February 24, 2021

Cisco AnyConnect blocked in Hyper-V virtual machine

Cisco AnyConnect is a popular VPN client. The VPN server can enforce policies on connecting clients. One such policy blocks access from remote desktop connections. I assume that this is primarily to block connections from Remote Desktop servers or Windows 10 Remote Desktop, where the same computer might be shared by multiple users simultaneously.

The error message you'll see is:

VPN establishment capability from a remote desktop is disabled. A VPN connection will not be established.

You might see this error when you use Hyper-V Manager to access a virtual machine and run Cisco AnyConnect. By default, the connection to a virtual machine is an enhanced session that is based on RDP. If you disable the Enhanced session setting in the View menu then Cisco AnyConnect will run and connect properly.
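If you'd rather turn off enhanced sessions for all VM connections on the host instead of per connection in the View menu, you can likely do it with the Hyper-V PowerShell module. This is a host-wide setting, so weigh it against the convenience features (clipboard, display resizing) that enhanced sessions provide:

```powershell
# Check whether enhanced session mode is currently enabled on this Hyper-V host
Get-VMHost | Select-Object EnableEnhancedSessionMode

# Disable enhanced session mode host-wide so VM connections use basic sessions
Set-VMHost -EnableEnhancedSessionMode $false
```

Run this elevated on the Hyper-V host. Individual users can still toggle the setting back in Hyper-V Manager.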

What is Azure AD Domain Services?

Sometimes letting go of old information and assumptions is the hardest part of learning something new. That's what I ran into today trying to wrap my head around Azure AD Domain Services. To purge your brain, let me start by saying that Azure AD Domain Services does not behave like an off-site domain controller for your on-premises deployment of Active Directory Domain Services (AD DS).

Some quick definitions:

  • Active Directory Domain Services (AD DS) - Commonly called Active Directory, this is your local directory service/domain. You have domain controllers for this domain. This domain holds user and computer objects.
  • Azure Active Directory - Commonly called Azure AD, this is the cloud directory service used for Microsoft cloud services such as Exchange Online and SharePoint Online.
  • Azure AD Connect - This is software that you run on-premises to synchronize users and groups from AD DS on-premises to Azure AD. This allows your users to sign in to Microsoft cloud services by using the same username and password as the local AD.

Azure AD Domain Services (Azure AD DS) is a limited version of AD DS that is provided as a cloud service. Like AD DS, it can have user and computer accounts.

The main use case for Azure AD DS is hosting line of business applications in Azure. The virtual machines in Azure can be joined to Azure AD DS and managed by using Group Policy. You have the ability to create organizational units to organize the VMs.

Note: Azure AD DS is a separate domain and not directly linked to your on-premises AD DS.

To simplify user access to resources joined to Azure AD DS, users' credentials are the same as those in on-premises AD DS. The UPN and password are the same in both environments. The user accounts are synchronized as follows:

  • On-premises AD DS --> Azure AD Connect
  • Azure AD Connect --> Azure AD
  • Azure AD --> Azure AD DS

Implementing Azure AD DS avoids the need to create a VPN from on-premises to Azure to support hosting a domain controller in Azure. It also avoids the need to manage domain controllers in Azure.

More information:

Tuesday, February 9, 2021

OAuth Certificates with Hybrid Exchange

Older versions of Microsoft Exchange in a hybrid configuration with Exchange Online (EXO) used a federation trust to authenticate connections for free/busy information. Newer hybrid deployments of Exchange 2016/2019 use OAuth authentication instead of federation.

OAuth authentication is reliant on the Auth certificate in your on-premises Exchange. This certificate is created automatically with a lifetime of 5 years when you install Exchange Server on-premises. If this certificate has been replaced, then you also need to update Azure AD with the new certificate information. The simplest way to update the information is by running the hybrid wizard again after you update the Auth certificate.
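To see whether your Auth certificate is nearing expiry, you can list it from the Exchange Management Shell. This sketch assumes the default subject name that Exchange assigns to the self-signed Auth certificate; if yours was replaced with a differently named certificate, filter accordingly:

```powershell
# The self-signed Auth certificate normally uses this subject by default
Get-ExchangeCertificate | Where-Object { $_.Subject -like '*Microsoft Exchange Server Auth Certificate*' } |
    Format-List Thumbprint, Subject, NotBefore, NotAfter
```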

I wrote a previous post about renewing/updating the Exchange Server Auth certificate here:

If you update the Exchange Server Auth certificate and forget to update the information in Azure AD, you are likely to see free/busy lookups to EXO fail. I recently saw this at a client and decided to dig into the configuration a little bit more.

Testing OAuth Connectivity

You can test OAuth authentication from the Exchange Management Shell on-premises. When doing this, you need to specify a local mailbox, as in the following command (substitute a mailbox from your environment for the MailboxIdentity placeholder):

Test-OAuthConnectivity -Service EWS -TargetUri 'https://outlook.office365.com/ews/exchange.asmx' -Mailbox MailboxIdentity -Verbose | FL

With the FL in the above command you'll see detailed information returned. If you run the command without FL and it's successful, you'll see output like this:


If the test is unsuccessful, you'll see text something like:

  • The remote server returned an error: (401) Unauthorized
  • Unable to get the token from Auth Server
  • Client assertion contains an invalid signature [Reason - The key was not found]

Verifying the Certificate Used for OAuth

To identify the Auth certificate currently used by on-premises Exchange Server, run Get-AuthConfig. In the example below, you can see the thumbprint for the currently used certificate. If the Auth certificate had been updated, it may show a previous certificate thumbprint too. The service name returned by this command is a GUID for Exchange Online.
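The relevant properties can be pulled out directly; values in your environment will differ from any shown in the screenshots:

```powershell
# CurrentCertificateThumbprint is the Auth certificate in use;
# PreviousCertificateThumbprint is populated after a certificate rollover;
# ServiceName is the GUID representing Exchange Online
Get-AuthConfig | Format-List CurrentCertificateThumbprint, PreviousCertificateThumbprint, ServiceName
```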

Once you have the thumbprint of the Auth certificate, you can verify it exists on each of the on-premises Exchange servers. All servers should have the same Auth certificate. Use the following command for each of your on-premises Exchange servers.

Get-ExchangeCertificate -Thumbprint ThumbprintFromAuthConfig -Server ServerName

Verifying Certificate Information in Azure AD

The Auth certificate configured in on-premises Exchange Server is used for client authentication to Azure AD for free/busy lookups. The public portion of the certificate is stored in Azure AD for this purpose. You can view this information by using the MSOL or AzureAD cmdlets.

To view the certificate information with the MSOL cmdlets, run the following command:

Get-MsolServicePrincipalCredential -ServicePrincipalName "00000002-0000-0ff1-ce00-000000000000" -ReturnKeyValues $true


This command may return multiple results. In my test environment, there are multiple entries for the same certificate. I assume that this is because it gets added each time I run the hybrid wizard.

The easiest way to verify that the certificate in Azure AD matches your on-premises Auth certificate is by using the StartDate and EndDate fields. These will match the NotBefore and NotAfter properties shown by Get-ExchangeCertificate. Be aware that StartDate and EndDate are in UTC, while NotBefore and NotAfter are probably displayed in your local time zone.

You can also save the Value property in a text file with the extension .cer. Then you can open the .cer file and view the certificate information. This includes additional information such as the Thumbprint which you can verify against the on-premises Auth certificate.
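One way to do that export is from PowerShell. The file path and the use of the first entry are examples; pick the entry you want to inspect:

```powershell
# Get the key credentials for the Exchange Online service principal
$creds = Get-MsolServicePrincipalCredential -ServicePrincipalName "00000002-0000-0ff1-ce00-000000000000" -ReturnKeyValues $true

# Save the base64 certificate value of the first entry as a .cer file,
# which can then be opened with the Windows certificate viewer
$creds[0].Value | Out-File -FilePath C:\Temp\AuthCertFromAzureAD.cer
```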


You can use the Azure AD cmdlets to get similar information (but not the certificate value). This is a bit more complex and requires multiple steps.

To get the ObjectID of the SPN for Exchange online:

$spnID = (Get-AzureADServicePrincipal | Where DisplayName -eq 'Office 365 Exchange Online').ObjectID


To list the certificates that have been uploaded:

$certs = Get-AzureADServicePrincipalKeyCredential -ObjectID $spnID


To get the thumbprint of the most recent certificate uploaded:
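A sketch of one way to do this follows. It assumes (as is commonly the case with Azure AD key credentials, but verify in your environment) that CustomKeyIdentifier holds the certificate thumbprint as a byte array:

```powershell
# Take the key credential with the latest expiry date
$latest = $certs | Sort-Object -Property EndDate -Descending | Select-Object -First 1

# Format the CustomKeyIdentifier bytes as a hex thumbprint string
[System.BitConverter]::ToString($latest.CustomKeyIdentifier) -replace '-', ''
```

You can compare the resulting hex string against the thumbprint reported by Get-AuthConfig on-premises.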



Update Auth Certificate in Azure AD

If you find that the on-premises Auth certificate is not present in Azure AD, the best solution is to run the hybrid wizard again, which updates the certificate information in Azure AD.

If for some reason you don't want to run the hybrid wizard, you can update the certificate manually by exporting it from on-premises and importing it into Azure AD.

Steps 3 and 4 in the following document describe how to manually export and import the Auth certificate:

Links found during research