Tuesday, September 27, 2016

EOP not Moving Messages to Junk Email On-Premises

Exchange Online Protection (EOP) is Microsoft's solution for anti-spam and anti-malware filtering. It is included as part of Office 365/Exchange Online, and you can subscribe to it for on-premises Exchange.

When you implement EOP, you configure the MX records for your domain to deliver messages to EOP, and EOP then forwards them to your Exchange server. If a message contains malware or is obvious spam, it is typically blocked and not forwarded to your Exchange server. It can be quarantined in EOP or discarded.

Where it gets a bit tricky is with messages that might or might not be spam. They look spammy, but might be legitimate email. In Office 365, those messages are automatically moved to your Junk Email folder. For spammy messages to be moved to the Junk Email folder in on-premises Exchange, you need to create a couple of transport rules.

EOP adds an X-Forefront-Antispam-Report header to messages after they are evaluated. You need to create transport rules in your on-premises Exchange to read the value in this header and set the SCL (spam confidence level) value for the message. Exchange Server uses the SCL value to determine whether a message is moved to the Junk Email folder.

Microsoft indicates that you should create the following two rules:
New-TransportRule "EOPSpam1" -HeaderContainsMessageHeader "X-Forefront-Antispam-Report" -HeaderContainsWords "SFV:SPM" -SetSCL 6
New-TransportRule "EOPSpam2" -HeaderContainsMessageHeader "X-Forefront-Antispam-Report" -HeaderContainsWords "SFV:SKS" -SetSCL 6

Notice that these rules set the SCL value to 6, which means that, by default, Exchange Server will mark the messages as spam and move them to the Junk Email folder.
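After creating the rules, you can confirm they exist and stamp the expected SCL. This is a quick sketch that assumes you are in the Exchange Management Shell; it simply lists any transport rules that set an SCL value:

```powershell
# List all transport rules that set an SCL value and show
# the header condition and SCL each one applies
Get-TransportRule | Where-Object {$_.SetSCL -ne $null} |
    Format-Table Name,HeaderContainsMessageHeader,HeaderContainsWords,SetSCL -AutoSize
```

If the two EOPSpam rules appear with SetSCL set to 6, the rules are in place.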

However, I recently had a client where, after configuring these rules, obvious spam messages were still not going into Junk Email. At some point, this organization had changed the SCL threshold that identifies spam. They had a value of 8, so messages stamped with an SCL of 6 were not treated as spam.

So, when you implement these rules, you should also verify the SCLJunkThreshold configured for the Exchange organization. You can view the SCLJunkThreshold with the following command:
Get-OrganizationConfig | FL SCL*

If you need to change the SCLJunkThreshold, use the following command:
Set-OrganizationConfig -SCLJunkThreshold 6

Microsoft article about creating the transport rules for Junk Email processing:
X-Forefront-Antispam-Report values:

Monday, September 26, 2016

Updated MCSE Certification for Exchange

I got a surprise email today indicating that I have a new Microsoft certification, the MCSE (Microsoft Certified Solutions Expert): Productivity. This is the new MCSE certification that encompasses certification for Office 365, Exchange, Skype for Business, and SharePoint. The existing MCSE: Messaging is being retired March 31, 2017.

The overall MCSE certifications have been reorganized around the technical competencies that Microsoft partners are organized around. So, there are going to be fewer MCSE certifications with a wider focus:
  • MCSE: Productivity (includes Exchange)
  • MCSE: Cloud Platform and Infrastructure (Windows Server and Azure)
  • MCSE: Mobility (Windows 10 and Intune)
  • MCSE: Data Management and Analytics (SQL Server)
One of the big changes is how the MCSE ongoing certification is managed. The current MCSE: Messaging required recertification every 3 years to retain the MCSE. There was a specific recertification exam or you had the option to complete a series of MVA courses.

In the new MCSE: Productivity, you do not need to recertify every three years, but you have the option to update your MCSE each year by taking a new elective exam. The idea behind the new MCSE is that you are constantly improving your skills by adding to them each year. Effectively, instead of getting a permanent MCSE, you get an MCSE for the year, and you can continue to earn it yearly.

You also get more options for how to maintain your MCSE: Productivity. The base certification for the MCSE: Productivity is the MCSA: Office 365. Then you add a new elective each year. Some of the options are:
  • Designing and Deploying Microsoft Exchange Server 2016
  • Core Solutions of Microsoft Skype for Business 2015
  • Deploying Enterprise Voice with Skype for Business 2015
  • Core Solutions of Microsoft Exchange Server 2013
  • Advanced Solutions of Microsoft Exchange Server 2013
  • Managing Microsoft SharePoint Server 2016
  • Core Solutions of Microsoft SharePoint Server 2013
  • Advanced Solutions of Microsoft SharePoint Server 2013

Microsoft provides a FAQ about the new MCSE certifications here:

Details about the MCSE: Productivity are here:

And a blog posting on Born to Learn explaining the change:

Wednesday, August 31, 2016

Script to Create Migration Batches

Migration batches are a nice feature introduced in Exchange 2013 for managing mailbox moves. In general, they work pretty well, but it can be a bit awkward to work with batches in large organizations. The graphical interface only displays 500 mailboxes at a time, which can be limiting.

To get the batches exactly as you want them, you often end up exporting a list of mailboxes to a CSV file and then cutting that down into batches. You can create a batch by importing a CSV file.
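The CSV files themselves are simple. New-MigrationBatch only requires an EmailAddress column; extra columns (like Name or Batch below) are tolerated when you allow unknown columns. A minimal example of one batch file might look like this (addresses are examples):

```
Name,EmailAddress,Batch
User One,user1@contoso.com,1
User Two,user2@contoso.com,1
```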

To simplify this process, I've created a script that takes all the mailboxes from a list of source databases and generates the CSV files in batch sizes that you specify. For example, you can use both Admin1DB and Admin2DB as your source databases. If you specify a batch size of 200 mailboxes, then you'll get CSV files with 200 mailboxes each, except for the final CSV, which holds the leftovers.

The script can also automatically create the migration batches if you turn that option on. In the script, you can specify one or more destination mailbox databases for the moves.

The script does not specify an archive database. If your users have archive mailboxes, then you'll need to edit the migration batches after creation and specify the correct databases for the archive mailboxes.

A final note: The script creates the migration batches, but does not start them. Starting the migration batches and the timing of it is up to you.

The script is below. I hope you find it useful.

 #This script selects users from existing source mailbox databases
 #and creates migration batches to new mailbox databases.
 #Typical use is migration projects, but it could also be used for
 #server retirement or cleaning up corrupted mailbox databases.
 #Multiple migration batches are created, but you need to start them.
 #Archive mailboxes are not included when the migration batches are
 #created. To move archive mailboxes, edit the migration batches
 #after they are created.
 #Created by Byron Wright (@ByronWrightGeek)

 #Set variables for creating batches
 #The values below are examples; adjust them for your environment
 #BatchSize is the number of mailboxes in each batch
 $BatchSize=200
 #BatchName is text added on to the CSV name and batch name
 #This is useful to uniquely identify groups such as Admin or Students
 $BatchName="Admin"
 #Path for CSV files
 #This path must already exist
 $CSVPath="C:\MigrationBatches"
 #Mailboxes in the source mailbox databases are put into migration batches
 #One or more databases can be used
 $SourceMbxDB="Mailbox Database 0999986598","VIP Users"
 #When $CreateMigrationBatch is $true a migration batch is created from
 #each CSV file. If it is $false then only the CSV files are created.
 #If $false then you don't need to configure the destination database
 $CreateMigrationBatch=$true
 #Batches move mailboxes to the destination mailbox databases
 #One or more databases can be specified
 #When multiple databases are specified, the mailboxes are
 #spread among the databases based on number of mailboxes and
 #not size of mailboxes.
 $DestinationDB="DB1","Mailbox Database 1840440945"

 #Get list of mailboxes to move
 $Mbx=@()
 Foreach ($s in $SourceMbxDB) {
      $Mbx+=Get-Mailbox -Database $s
 }

 #Calculate the total number of batches
 $TotalBatches=[math]::Ceiling($Mbx.Count/$BatchSize)

 #Add a batch number property to each mailbox
 #Add the EmailAddress property to each mailbox as required for the New-MigrationBatch CSV
 $i=0
 Foreach ($m in $Mbx) {
      $batch=[math]::Floor($i/$BatchSize)+1
      $m | Add-Member -NotePropertyName Batch -NotePropertyValue $batch
      $m | Add-Member -MemberType AliasProperty -Name EmailAddress -Value PrimarySMTPAddress
      $i++
 }

 #Create CSV files and migration batches
 For($b=1;$b -le $TotalBatches;$b++) {
      $BatchFileName="$BatchName-Batch$b"
      $BatchPath=Join-Path $CSVPath "$BatchFileName.csv"
      $Mbx | Where-Object {$_.Batch -eq $b} | Select-Object Name,EmailAddress,Batch | Export-CSV $BatchPath -NoTypeInformation
      If ($CreateMigrationBatch -eq $true) {
           New-MigrationBatch -Name $BatchFileName -CSVData ([System.IO.File]::ReadAllBytes($BatchPath)) -Local -TargetDatabase $DestinationDB -AllowUnknownColumnsInCsv $true
      }
 }
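When you're ready to start the batches, something like the following sketch works. The batch name Admin-Batch1 is an example based on the naming used above:

```powershell
# Review the created batches and their current status
Get-MigrationBatch | Format-Table Identity,Status,TotalCount

# Start a specific batch when you're ready
Start-MigrationBatch -Identity "Admin-Batch1"
```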

Thursday, August 25, 2016

Interpreting RepAdmin.exe /ReplSummary

One of the basic tasks you can do to verify Active Directory replication health is to run RepAdmin.exe /ReplSummary. The question becomes: what exactly do the results mean?

If you're looking for a quick analysis, here it is. With no fails and all largest deltas less than 1 hour, you're all good.

Now, for a more detailed look…

Total is the number of replication connections that a domain controller has with other domain controllers. The number of connections is probably higher than you expect because a separate connection is created for each Active Directory partition that’s being replicated. Most of the time you will have 5 per domain controller (domain, configuration, schema, DomainDnsZones, ForestDnsZones).

Fails is the number of connections that are not replicating properly. The number of fails should be zero.

Largest delta is where some of the confusion comes in. This is the longest period of time that any one connection between two domain controllers has gone without replicating. So, if a domain controller has one connection that has not replicated for 45 minutes and the others have replicated more recently, then this value is 45 minutes. That is why this value tends to be high.

Your other thought is likely: How can it take up to an hour for replication to occur?

Within an AD site, all changes are replicated within seconds. However, there are partitions that do not change often. In particular, the schema partition probably changes only every few months. However, even when there are no changes, the connection for a partition will communicate occasionally to verify that it didn’t miss a change notification (polling). The default value for polling within a site is once per hour. That’s where the up to 60 minutes comes from.

You can change the polling interval within a site to twice per hour or four times per hour. That will reduce the largest delta value, but it won't really make a difference in the replication of changed AD data. You're just triggering that polling to happen more often. And, almost all of the time, polling doesn't trigger any data replication, because replication notifications within the site would have already triggered it.

If you do want to change this value, it’s done in the Schedule for the NTDS Settings object in the AD site. It can also be modified for each connection individually, but it’s preferred to do it at the site level.

Between sites, the schedule on the site links determines how often the polling happens. Most organizations that I work with have dropped the schedule down to every 15 minutes from the default of 180 minutes. If it were left at the default, the largest delta could range up to 180 minutes.
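A sketch of adjusting the intersite interval with the ActiveDirectory module follows. The site link name DEFAULTIPSITELINK is the out-of-the-box default and may differ in your forest:

```powershell
# View the current replication interval on all site links
Import-Module ActiveDirectory
Get-ADReplicationSiteLink -Filter * |
    Format-Table Name,ReplicationFrequencyInMinutes

# Drop the interval to 15 minutes on a specific site link
Set-ADReplicationSiteLink -Identity "DEFAULTIPSITELINK" -ReplicationFrequencyInMinutes 15
```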

Friday, August 12, 2016

Update Mount-ADDatabase for PowerShell v2

I'm working on some Active Directory (AD) disaster recovery projects right now and one of the recovery methods we're implementing is AD snapshots. With AD snapshots, you have a copy of your AD data to identify and recover from accidental changes.

The client I'm working with has Windows 2008 R2 with PowerShell 2.0 for their domain controllers. PowerShell is my preferred method for automating anything at this point but AD snapshots don't have any PowerShell cmdlets.

Fortunately Ashley McGlone, a Microsoft PFE, has created some PowerShell functions that help you manage and use AD snapshots. One of the coolest things in there is a function (Repair-ADAttribute) that lets you pull attributes from the snapshot and apply them to the same object in the production AD. You can read more about these functions and download them from these two locations:
The minor issue I ran into is with the Mount-ADDatabase function. This function has a -Filter parameter that displays a list of AD snapshots and lets you choose which one to mount. In Ashley's function, this is done by using Out-GridView with the -OutputMode parameter, which requires PowerShell v3. Using Out-GridView is an easy way to allow the user to select the snapshot, and I wish it worked on my servers using PowerShell v2. Here is the line from the function:

 $Choice = $snaps | Select-String -SimpleMatch '/' |  
       Select-Object -ExpandProperty Line |  
       Out-GridView -Title 'Select the snapshot to mount' -OutputMode Single  

For my project, getting all of the DCs upgraded to PowerShell v3 would take a while. I also didn't want to leave the project in a place where a whole bunch of manual steps were required to mount an AD snapshot older than the previous day. So, let's convert this to a method that works in PowerShell v2.

Now I needed a way to convert a list of snapshots into a menu. My starting point was a TechNet discussion posting from Grant Ward (Bigteddy). You can view his solution in the discussion here:
Using that example, I created the following code:

 $choices = $snaps | Select-String -SimpleMatch '/' | Select-Object -ExpandProperty Line
 $menu = @{}
 $i = 1
 foreach ($s in $choices) {
      Write-Host "$i --> $s"
      $menu.Add($i,$s)
      $i++
 }
 [int]$ans = Read-Host 'Enter selection'
 $Choice = $menu.Item($ans)

This code takes the list of snapshots in the variable $snaps and does two things:
  • Writes a menu to the screen
  • Adds each menu item to the hashtable $menu
After the menu is displayed on the screen and the user selects an option (the $ans variable), the option is used to place the snapshot name into the $Choice variable for further processing. Now we have a version that works in PowerShell v2.

Monday, August 8, 2016

Finding the User or Group Name from a SID

I'm working on a project where we needed to set AD security permissions in a test environment based on the permissions in production. When I generated a report of the AD permissions that had been applied, several of the entries came back with SIDs instead of user or group names. Typically this means that the user or group has been deleted, but I wanted to confirm.

I wanted to take the SID and identify the user or group account that was associated with it. After a quick search I found a few examples that looked similar to this:

 $objSID = New-Object System.Security.Principal.SecurityIdentifier("S-1-5-21-1454471165-1004335555-1606985555-5555")  
 $objUser = $objSID.Translate([System.Security.Principal.NTAccount])  

Above example taken from: https://technet.microsoft.com/en-us/library/ff730940.aspx

It seemed to me that there had to be an easier way using the ActiveDirectory module for PowerShell, which isn't used by these examples. Good news: there is!

You can't use Get-ADUser or Get-ADGroup to resolve the SID because the object could be either one. However, you can use Get-ADObject:

 Get-ADObject -Filter {objectSID -eq "S-1-5-21-1454471165-1004335555-1606985555-5555"}  

If the command does not return any results then there is no AD object with that SID.
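If you have a whole report of SIDs to check, the same lookup can be wrapped in a loop. This is a sketch; the file name C:\sids.txt is an example, and the file is assumed to hold one SID per line:

```powershell
# Resolve each SID in a text file to an AD object name and class
Import-Module ActiveDirectory
$sids = Get-Content "C:\sids.txt"
foreach ($s in $sids) {
    $obj = Get-ADObject -Filter {objectSID -eq $s}
    if ($obj) {
        Write-Host "$s => $($obj.Name) ($($obj.ObjectClass))"
    } else {
        Write-Host "$s => no matching object (likely deleted)"
    }
}
```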

Friday, July 29, 2016

Finding PCs Infected with WORM_ZLADER.B

A virus that propagates by renaming folders and storing an executable file in a $Recycle.Bin folder has recently been making the rounds. Just today we saw this on some shared folders.

Here is a link with more info about the virus:

So, we can identify that someone is infected by finding $Recycle.Bin folders in the shares. But we need to track down where the infection came from. To do that, we want to see the owner of each $Recycle.Bin folder, because that is the person who created it.

You can't get the owner of the $Recycle.Bin folder by using Windows Explorer because Explorer treats this as a special folder type and limits what you can see. However, you can find the owner by using Get-ACL in PowerShell.

On a large file server with many shares, it's useful to scan the whole server to verify the extent of the infection. The script below starts in the current directory, finds all of the $Recycle.Bin folders, and displays the full folder path and the owner of each folder.

 $Folders=Get-ChildItem -Force -Recurse -ErrorAction SilentlyContinue | Where-Object {$_.PSIsContainer -eq $true -and $_.Name -like '$Recycle.B*'}
 Foreach ($f in $Folders) {
   $owner=($f | Get-Acl).Owner
   Write-Host "Folder: $($f.FullName)"
   Write-Host "Owner: $owner"
   Write-Host ""
 }

Some others have written scripts to rename folders back to their original names which is useful if many folders were infected. See these links for more information: