Friday, September 27, 2019

Azure AD Connect 1.4.x.0 Deletion Threshold Exceeded

Azure AD Connect is configured to perform automatic upgrades by default. When version 1.4.x.0 (in my case 1.4.18.0) is installed, device objects previously synced to Azure AD might be removed. Previous versions of Azure AD Connect synchronized device objects that Azure AD doesn't actually use, and this release cleans them up.

In larger organizations, the number of devices deleted might be more than 500, which exceeds the default deletion threshold. At that point, Azure AD Connect stops exporting changes to Azure AD. You might not notice it right away, but new user accounts will not be synced to Azure AD/Office 365.

In the Synchronization Service app, you will see a line with the status of:
stopped-deletion-threshold-exceeded

Before you attempt to fix the issue, you should verify that only device objects are affected and not some other accidental deletion. The steps for this from Microsoft are:

  1. Start Synchronization Service from the Start Menu.
  2. Go to Connectors.
  3. Select the Connector with type Azure Active Directory.
  4. Under Actions to the right, select Search Connector Space.
  5. In the pop-up under Scope, select Disconnected Since and pick a time in the past. Click Search. This page provides a view of all objects about to be deleted. By clicking each item, you can get additional information about the object. You can also click Column Settings to add additional attributes to be visible in the grid.

The fix for this issue is to allow the device deletes to occur by either increasing the threshold or disabling the threshold. You do this on your Azure AD Connect server using PowerShell.

To disable the threshold:
Disable-ADSyncExportDeletionThreshold
To increase the threshold:
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 1000
To set the threshold back to default:
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500
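If you don't want to leave the threshold disabled or raised permanently, a reasonable sequence (a rough sketch using the standard ADSync cmdlets on the Azure AD Connect server) is to disable the threshold, run a sync cycle so the device deletions export, and then put the default back:

# Run on the Azure AD Connect server; you may be prompted for Azure AD credentials
Disable-ADSyncExportDeletionThreshold

# Trigger a delta sync cycle so the pending deletions are exported
Start-ADSyncSyncCycle -PolicyType Delta

# After the export completes, restore the default protection
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500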

Tuesday, September 17, 2019

Your administrator has blocked this application

I do a lot of work with PowerShell and Office 365. To allow for multi-factor authentication when managing Exchange Online, you can use the Microsoft Exchange Online PowerShell Module.

I installed the Microsoft Exchange Online PowerShell Module on my computer some time back and had used it successfully. However, at some point it stopped working and now gives the error:

Your administrator has blocked this application because it potentially poses a security risk. Your security settings do not allow this application to be installed on your computer.
 

For a while, I've been connecting with a normal PowerShell session for management, but today I wanted to get this thing fixed. This error can apply to ClickOnce applications in general; it is not specific to the Microsoft Exchange Online PowerShell Module.

There are trust levels that you can define for ClickOnce applications. These are set in HKLM\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel. There are values for the different security zones. On my system, all of the zones were set to Disabled.



Valid values for these settings are:
  • Disabled. App is never allowed.
  • AuthenticodeRequired. User is prompted to allow app only if the app is digitally signed by using a trusted certificate that identifies the publisher.
  • Enabled. User is prompted to allow app even if not digitally signed.
I have set my zones to AuthenticodeRequired. Any app I'm using should be signed by the publisher and not self-signed. After making the change, I'm prompted to allow the app.
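If you'd rather script the change than edit the registry by hand, here is a minimal PowerShell sketch that sets every zone to AuthenticodeRequired. It assumes the PromptingLevel key and zone value names described above; adjust it if you only want to relax specific zones.

# ClickOnce trust prompt settings (run from an elevated PowerShell session)
$key = "HKLM:\SOFTWARE\Microsoft\.NETFramework\Security\TrustManager\PromptingLevel"

# Create the key if it doesn't already exist
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Prompt only for apps signed with a trusted publisher certificate
foreach ($zone in "MyComputer","LocalIntranet","Internet","TrustedSites","UntrustedSites") {
    Set-ItemProperty -Path $key -Name $zone -Value "AuthenticodeRequired"
}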



It seems odd that I needed to do this, but I have some Visual Studio components installed on this computer and that might have created the registry keys and set them to disabled.


Saturday, September 14, 2019

SAGE 50 Email Integration Woes

Sage 50 is a pretty common app in Canada for doing small business accounting. However, one of its major drawbacks is really poor email integration. I think they've improved it somewhat in recent versions, but there is a MAPI dependency.

If you install the 64-bit version of Office, then Sage 50 will not be able to use Outlook to send messages. Now that 64-bit Office is the default for Office 365, you need to watch for that as step one. However, yesterday, on a new install of Sage 50, it wasn't working even with the 32-bit version of Outlook.

We got the error:
Sage 50 cannot communicate with your e-mail program. Please ensure that your email program is MAPI-compatible and that it is the default MAPI client

You also need to have Outlook configured as the default mail program. The Mail program in Windows was configured as the default. So, we changed that to Outlook. Still no luck. Same error.


The final fix for me was adding a registry key. According to a few people in discussion forums, without that key, Sage 50 doesn't load the mapi32.dll that's required to send email. Silly that this is still happening.

In the registry at HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Windows Messaging Subsystem, I needed to create a new string value (REG_SZ) named MAPI with a value of 1.
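For reference, a PowerShell equivalent of that registry change (a sketch; run from an elevated prompt, and note the value is the string 1, not a DWORD):

# Create the MAPI value under the 32-bit Windows Messaging Subsystem key
New-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows Messaging Subsystem" -Name "MAPI" -Value "1" -PropertyType String -Force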


Wednesday, July 17, 2019

Visual Studio 2017 TFS Client - Clear Cached Creds

This one is just a note for me.

To clear cached credentials for the Visual Studio 2017 Team Explorer (TFS) client, browse to C:\Users\<username>\AppData\Roaming\Microsoft\VisualStudio\15.0_ed299a44\Team Explorer and delete the TeamExplorer.config file. (The 15.0_xxxxxxxx instance folder name varies per installation.) You will then be prompted for credentials the next time you start the TFS client.
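A one-liner to do the same thing (a sketch; the wildcard accounts for the per-installation instance folder name):

Get-ChildItem "$env:APPDATA\Microsoft\VisualStudio\15.0_*\Team Explorer\TeamExplorer.config" | Remove-Item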

Saturday, June 15, 2019

Hyper-V SCSI Controller Error

I recently upgraded the storage on my Hyper-V server, which hosts all of my virtual machines, to SSD drives. As part of this, I got lazy for some of the VMs and copied the files from drive to drive manually by using File Explorer rather than moving the VM storage by using Hyper-V Manager. After the disk reconfiguration was done, I got this error for the VMs where I had simply copied the data.
Synthetic SCSI Controller (Instance ID GUID): Failed to Power on with Error 'General access denied error'.
Account does not have permission to open attachment 'PathToVirtualDisk'. Error: 'General access denied error'.


This is a permissions error indicating that the VM doesn't have access to its own virtual hard disk. As part of my file copying, the VM-level permissions were lost. You can see in the screenshot below that only System, Administrators, and Users have permissions. Normally, you should also see an entry granting Full control to a GUID that represents the VM.


A quick and dirty fix that you can use for test environments is giving Full control to Everyone. However, for production we can do better than that. We can rebuild the permissions for each VM properly.

The basic process is:
  1. Get the GUID for the virtual machine
  2. Set permissions on the virtual hard disk file for the GUID
We can get the GUID for the virtual machine by using PowerShell:
(Get-VM VMName).VMId.Guid


Then set permissions by using icacls.exe:
icacls.exe VirtualDiskFile /Grant Guid:F

Notice in the example that the grant is a combination of the GUID and the permission that we want to assign. In this case, the GUID is being assigned Full control (F). The result is shown in the permissions on the file.
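For example, with a hypothetical VM named NYC-DC1 and a made-up GUID and path, the two steps look like this:

PS> (Get-VM NYC-DC1).VMId.Guid
8f0a4b6e-22c4-4cde-9f00-0123456789ab

PS> icacls.exe "D:\Hyper-V\NYC-DC1\NYC-DC1.vhdx" /grant 8f0a4b6e-22c4-4cde-9f00-0123456789ab:F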


If you only have one or two VMs to fix, this method is pretty quick and easy. However, I had about twenty of them. So, I created a script to handle this.

# Prompt for a single VM name or a wildcard pattern
$VMName = Read-Host "Enter VM Name or name pattern (Example NYC*)"

$VM = Get-VM $VMName

Foreach ($v in $VM) {
    # Get all virtual hard disks attached to this VM
    $Disks = Get-VMHardDiskDrive $v

    # Build the grant string: VM GUID plus Full control (F)
    $Permissions = $v.VMId.Guid + ":F"

    Foreach ($d in $Disks) {
        icacls $d.Path /grant $Permissions
    }
}

This script will ask you for the name of the VM. You can enter a single VM name or a text pattern to query a list of VMs.

Then the script loops through each of the VMs, finds the disks for each VM, and sets the permissions.

If you have snapshots on the VMs, it properly sets the permissions on the .avhdx file in use for the snapshot. I have not verified whether permissions on the main .vhdx file that the snapshot is based on need to be modified after a snapshot is removed. Worst case, just run the script again for the VM and it will fix it after the snapshot is removed.

Thursday, June 13, 2019

Set PowerShell prompt text

I have an annoying issue where I'm storing scripts in a path so long that it makes it awkward to work at the PowerShell prompt. Almost everything I do is wrapping onto the next line.

So, to set the prompt to static text that doesn't include the path, use the following command:
function prompt {"PS> "}
If I'm working with PowerShell prompts connected to different Office 365 tenants, I'll put in text that identifies the tenant.
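For example (Contoso is just a placeholder for whatever identifies the tenant):

function prompt {"Contoso PS> "}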

If you do need to view the current directory, you can use Get-Location or $pwd.

Unable to add drive to storage pool

I bought some new SSD drives for my test server that I run VMs on. The number of disks the system could handle was maxed out. So, I needed to shuffle around some data as part of the installation process.

During my shuffling, I temporarily added two of the SSD drives and used them as normal drives (not in a storage pool). Later, I deleted the data from those drives and wanted to create a new storage pool with those two drives. However, I found that when I ran the wizard to create the new storage pool, the drives were missing (not listed). They were also not listed in the primordial pool.

Using Server Manager, I tried:
  • removing volumes
  • resetting the drives
  • reinitializing the drives
  • taking the drives offline and online
  • changing between GPT and MBR
I saw some website references to drives attached to RAID cards having duplicate identifiers, but mine were attached directly to the SATA interface and had unique identifiers. However, when I ran Get-PhysicalDisk, I noticed that the drives had the CanPool property set to False. This seemed a likely explanation for my issue.

After a quick bit of searching, I found that running Reset-PhysicalDisk should change the CanPool property to True. And it did. What surprised me is that resetting the drive by using Server Manager didn't set that property back to True.
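Roughly, the check and the fix look like this (a sketch; the friendly name is a placeholder for your disk, and Reset-PhysicalDisk is destructive, so be sure there's nothing left on the drive):

# Show which disks are currently eligible for pooling
Get-PhysicalDisk | Select-Object FriendlyName, SerialNumber, CanPool

# Reset a specific disk that shows CanPool = False (this clears the disk)
Reset-PhysicalDisk -FriendlyName "Samsung SSD 860 EVO 1TB"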


Once I reset the disks with Reset-PhysicalDisk, I was able to create a new storage pool using those disks. And my test server has never been faster!
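For completeness, the pool creation can also be done in PowerShell instead of Server Manager (a sketch; the pool name is arbitrary and this assumes a single local storage subsystem):

# Gather every disk that is now eligible for pooling and build a new pool from them
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "SSDPool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks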