31 December 2016

Poor Network Performance | Network Shares (Server 2012 R2)

I was recently doing some work for a client where they had noticed that the network performance from their workstations to the File Server was rather poor.  When transferring data to the File Servers (and any other shares on the Virtual Host), it was very slow.

All the VMs were either Server 2016 or 2012 R2, and they were running on a Virtual Host which was Server 2012 R2.  The server was a Lenovo x3650 M5.  All the NICs were Broadcom and the drivers were fully up to date.

After looking into the issue, I found that some of the settings on the Network Adapters needed to be changed/updated (on the Virtual Host itself) to allow for faster transfer speeds.  To do this, I needed to open up each Network Adapter, then click on Configure, then the Advanced tab.  Once I did this, I had to set the following options to Disabled:
  • TCP/UDP Checksum Offload (IPv4)
  • TCP/UDP Checksum Offload (IPv6)
  • Large Send Offload V2 (IPv4)
  • Large Send Offload V2 (IPv6)
  • Virtual Machine Queues
Just remember that when performing these changes, it will drop the network connectivity to the adapter for about 5 seconds.  If you're making this change on a live host, it will potentially disrupt network traffic to the VMs and the Host.  If you have a NIC Team in place, do this to one Adapter, then wait for it to come back online before doing it to the next one to make sure that network connectivity to the host itself remains active. 
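If you'd rather script these changes than click through each adapter, the NetAdapter PowerShell module (included in Server 2012 R2) can make the same changes.  Treat this as a sketch: the adapter name is an example, and some Broadcom drivers expose slightly different property keywords, so check the output of Get-NetAdapterAdvancedProperty against the GUI names first.

```powershell
# Run elevated on the Virtual Host.  "NIC1" is an example adapter name;
# list the real names first with Get-NetAdapter.
Get-NetAdapter

# Checksum offload (covers both IPv4 and IPv6 keywords)
Disable-NetAdapterChecksumOffload -Name "NIC1"

# Large Send Offload V2 (IPv4 and IPv6)
Disable-NetAdapterLso -Name "NIC1"

# Virtual Machine Queues
Disable-NetAdapterVmq -Name "NIC1"
```

Each of these commands resets the adapter just like the GUI change does, so apply the same one-NIC-at-a-time caution if you have a NIC Team.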

After doing this to the NICs in my NIC Team, I tested the network connectivity and it was considerably faster.

28 December 2016

Directory service is missing mandatory configuration information | Server 2008R2

I was recently demoting a Domain Controller as I had upgraded to Server 2016, when I came across the following error message:


What this means is that the fSMORoleOwner attribute is most likely pointing to the server that you're trying to decommission, so the demotion can't complete.  What needs to be done is to update this to point to another DC that's active.

First, to confirm this, you will need to go into ADSI Edit. Connect to the following:


Once you've done this, open up CN=Infrastructure:


Look for fSMORoleOwner and check the server name that is referenced here:

In this case, it's showing my new DC; originally it was showing the DC that I wanted to decommission.  In order to resolve this, I used the following script:

const ADS_NAME_INITTYPE_GC = 3
const ADS_NAME_TYPE_1779 = 1
const ADS_NAME_TYPE_CANONICAL = 2

set inArgs = WScript.Arguments

if (inArgs.Count = 1) then
    ' Assume the command line argument is the NDNC (in DN form) to use.
    NdncDN = inArgs(0)
Else
    Wscript.StdOut.Write "usage: cscript fixfsmo.vbs NdncDN"
End if

if (NdncDN <> "") then

    ' Convert the DN form of the NDNC into DNS dotted form.
    Set objTranslator = CreateObject("NameTranslate")
    objTranslator.Init ADS_NAME_INITTYPE_GC, ""
    objTranslator.Set ADS_NAME_TYPE_1779, NdncDN
    strDomainDNS = objTranslator.Get(ADS_NAME_TYPE_CANONICAL)
    strDomainDNS = Left(strDomainDNS, len(strDomainDNS)-1)
     
    Wscript.Echo "DNS name: " & strDomainDNS

    ' Find a domain controller that hosts this NDNC and that is online.
    set objRootDSE = GetObject("LDAP://" & strDomainDNS & "/RootDSE")
    strDnsHostName = objRootDSE.Get("dnsHostName")
    strDsServiceName = objRootDSE.Get("dsServiceName")
    Wscript.Echo "Using DC " & strDnsHostName

    ' Get the current infrastructure fsmo.
    strInfraDN = "CN=Infrastructure," & NdncDN
    set objInfra = GetObject("LDAP://" & strInfraDN)
    Wscript.Echo "infra fsmo is " & objInfra.fsmoroleowner

    ' If the current fsmo holder is deleted, set the fsmo holder to this domain controller.

    if (InStr(objInfra.fsmoroleowner, "\0ADEL:") > 0) then

        ' Set the fsmo holder to this domain controller.
        objInfra.Put "fSMORoleOwner",  strDsServiceName
        objInfra.SetInfo

        ' Read the fsmo holder back.
        set objInfra = GetObject("LDAP://" & strInfraDN)
        Wscript.Echo "infra fsmo changed to:" & objInfra.fsmoroleowner

    End if

End if

Create a new VBS file with the above script and call it "FixFSMO.vbs".  Copy this to the desktop of an active DC and then run the following command:

cscript fixfsmo.vbs DC=DomainDnsZones,DC=contoso,DC=com



You will also need to run the same command, but for ForestDnsZones:

cscript fixfsmo.vbs DC=ForestDNSZones,DC=contoso,DC=com

Once you've done this, check the ADSI object again and you will notice this has now updated to an active DC.  Let this sit for 15 minutes or so to ensure that it syncs to all DCs, and then you should be able to re-run the DCPROMO to demote the Domain Controller. 
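If you'd rather confirm the change from PowerShell than from ADSI Edit, something like this should work (it assumes the RSAT ActiveDirectory module is installed; the DN is the contoso example from the commands above):

```powershell
Import-Module ActiveDirectory

# Read the fSMORoleOwner attribute back from the partition's Infrastructure object
Get-ADObject "CN=Infrastructure,DC=DomainDnsZones,DC=contoso,DC=com" -Properties fSMORoleOwner |
    Select-Object fSMORoleOwner
```

The value returned should now reference the NTDS Settings object of an active DC.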

Migrate DHCP Server to Server 2016

The following process can be followed when you're creating a new Domain Controller, and you'd like to migrate DHCP settings from an old DC to a new one.

I have done this from Server 2008 R2 to Server 2016, however the same process works for any version from 2008 through 2016.
  1. Log in to the old (existing) Domain Controller running DHCP
  2. Open up an Administrative Command Prompt
  3. Type the following:
    netsh dhcp server export C:\Users\<username>\Desktop\dhcp.txt all
  4. Copy the .txt file over to the desktop of the new DC
  5. Open up an Administrative Command Prompt
  6. Type the following:
    netsh dhcp server import C:\Users\<username>\Desktop\dhcp.txt all
  7. Open DHCP on the new 2016 server.  You will notice all the settings have now been migrated (including reservations and leases)
Once you've done this, you will then need to authorise the new DC and unauthorise the old DC.  This should happen automatically when you authorise the new DC, however make sure you double check this on the old one. 

To be on the safe side, once you've done this, make sure you disable the DHCP Server service on the old DC.  This will ensure it does not start again if the server is rebooted.
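If both servers are running Server 2012 or later, the DhcpServer PowerShell module offers an equivalent to the netsh export/import, and the final clean-up can be scripted too.  The paths below are examples only (netsh remains the option that works back to 2008 R2):

```powershell
# On the old DC: export scopes, options, reservations and leases
Export-DhcpServer -File C:\Temp\dhcp.xml -Leases

# Copy dhcp.xml to the new DC, then import it there
Import-DhcpServer -File C:\Temp\dhcp.xml -BackupPath C:\Temp\DhcpBackup -Leases

# Back on the old DC: stop the service and prevent it from restarting
Stop-Service DHCPServer
Set-Service DHCPServer -StartupType Disabled
```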

If you require IT Support in Perth, contact Winthrop Australia

Enable Split Tunnelling | Windows 10 VPN

In older versions of Windows (e.g. 7/8.1), you were able to enable Split Tunnelling by removing the default gateway IP address from the IPv4 settings of a VPN's properties.  This is no longer available in Windows 10, and you can't actually click on the IPv4 properties.

In Windows 10, you now need to enable Split Tunnelling through PowerShell.   It is done with a simple command:

Set-VPNConnection "VPN Name" -SplitTunneling $true



To verify that this was successful, you can type the following command to get the details of your VPN connection:

Get-VPNConnection
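If you only want the one property rather than the full output, this should confirm the change (the connection name is an example and should match whatever you used above):

```powershell
# Returns True once split tunnelling is enabled on the connection
(Get-VpnConnection -Name "VPN Name").SplitTunneling
```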



Winthrop Australia can supply all your IT Support needs in Perth, and most of Australia

21 December 2016

SMTP Relay Not Sending Mail

We have an SMTP Relay configured on a client's server to relay mail from on-premises to their Office 365 tenant.  This allows Scan to Email functionality from printers etc.

Recently the client told me that Scan to Email was failing for them.  I created a test email.txt file to send a simple email and put it in the 'Pickup' folder, and it just stayed there.  Usually it's picked up immediately and relayed.


I checked the services and noticed that the Simple Mail Transfer Protocol (SMTP) service was stopped, and for some reason it was set to 'Manual'.  A quick manual start and then changing this to 'Automatic' resolved the issue for me.

If you require IT Support or Consultancy, contact Winthrop Australia

15 December 2016

Missing Application in Task Sequence | SCCM 2012

Recently a client of mine was trying to add an application to be installed as part of the 'Install Application' sequence within an Operating System Deployment Task Sequence.  They were able to find many applications that were available, however they couldn't see this particular one (in this case it was VLC).

The application was showing up under Apps:


When adding it into the Task Sequence, there were no error messages; it was just not there:


To resolve this, go into the application itself, click on the Deployment Type tab, then click User Experience and make sure it's set to Logon Requirement: Whether or not a user is logged on.


Once you've done this, it will allow you to see the app and choose to add it into the Task Sequence. 


12 December 2016

Enable Multicast | SCCM 2012

This is a quick post to show you how to enable Multicast deployments through SCCM 2012.


  1. Click Administration
  2. Click Servers and Site System Roles
  3. Click on the SCCM server
  4. Double-click on Distribution Point
  5. Tick the Enable Multicast option


Once you have done that, you will need to enable the multicast distribution for any packages/operating systems you may have.  To do this, do the following:


  1. Click on Software Library
  2. Click on the folder where you've saved your packages or operating systems
  3. Right-click on the Operating System or Package and click on Properties
  4. Click on Distribution Settings
  5. Tick Allow this package to be transferred via Multicast



A word of warning: enabling Multicast in SCCM is not always recommended, as it has been known to cause issues with the WDS service constantly crashing. 

Winthrop Australia provides some of the best IT Support in Perth.  Contact us today to find out how we can help you.

Disable Yammer for all users | Office 365

I recently did an Office 365 migration where our client was using E3 licenses.  This includes a Yammer subscription, which they were not interested in using at this stage.  I was asked to disable this service for all users.

To do this, I did the following:

Get-MsolAccountSku | Format-List -Property accountskuid,activeunits,consumedunits
This will show you which license pack you're currently using:


Get-MsolAccountSku | Where-Object {$_.SkuPartNumber -eq "ENTERPRISEPACK_FACULTY"} | ForEach-Object {$_.ServiceStatus}
This shows the license packs that are available for this particular O365 License:
In this case we're wanting to disable "Yammer_EDU".
Type the following:
$x = New-MsolLicenseOptions -AccountSkuId "AccountSKUID:ENTERPRISEPACK_FACULTY" -DisabledPlans "YAMMER_EDU"
Note: the bold section is the AccountSKUID which has been blurred out in this case, but can be found here:
To apply this to all users who have a current O365 license, type the following:
Get-MsolUser -all | Where-Object {$_.isLicensed -eq $True} | Set-MsolUserLicense -LicenseOptions $x
This will take ~5 minutes or so depending on the number of users in your Tenant; once it has completed, you will notice that the Yammer license is now set to 'off'.

09 December 2016

WSUS Not Downloading Updates

I recently had a client who had WSUS set up on Server 2016.  It was trying to download some updates after a synchronisation, but it would freeze at 100% and not go any further.

Synchronisations were fine, and it would download the data that's required, however these 5 updates would just sit there.  Checking Event Logs, I saw the following error:


After running the following command, I found the following:

"C:\Program Files\Update Services\Tools\WsusUtil.exe CheckHealth"



It looks like the particular file it's trying to download is corrupt.  Checking WSUS to find out what the update is, KB3172989 is actually a CU for Server 2016 Technical Preview.  In this case, it's not needed so it was declined through WSUS.  After doing this, I did a search for all the Technical Preview updates, and declined them as well.  After running a Synchronisation again, it worked well. 
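Rather than declining each Technical Preview update by hand in the console, the UpdateServices PowerShell module (installed with the WSUS role) can do it in one pipeline.  This is a sketch; run the Get-WsusUpdate/Where-Object part on its own first to confirm the title filter only matches what you expect:

```powershell
# Decline every update whose title mentions "Technical Preview"
Get-WsusUpdate |
    Where-Object { $_.Update.Title -match "Technical Preview" } |
    Deny-WsusUpdate
```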

07 December 2016

Exchange 2007 Uninstall Hanging on 'Remove Exchange Files'

I was recently decommissioning an Exchange 2007 server for a client.  When I was going through the uninstallation process, I noticed that it was hanging at the 'Remove Exchange Files' section.

After giving it sufficient time to complete on its own, I had to go into the Task Manager to stop the PowerShell.exe task.


Simply end the process and PowerShell.exe will start back up immediately.  Once it has, take a look at the Exchange uninstallation progress, and you will notice that it completes within about 15 seconds of stopping the process.


23 November 2016

ExportO365UserInfo.ps1 | You cannot call a method on a null-valued expression

Recently I was performing an Office 365 migration for a client whose on-premises Exchange environment was 2007.  Once I had migrated the mailboxes, I needed to convert the users to Mail Enabled Users (MEUs).  This is explained on Microsoft's website here.

There are two scripts you're required to run.  When you're running the first one, which is ExportO365UserInfo.ps1, the MS website states you need to do this from EMS.  This is incorrect.  If you run this script from EMS, you will get the following error message:

You cannot call a method on a null-valued expression.
At C:\migrace\ExportO365UserInfo.ps1:53 char:52
+             $CloudEmailAddress = $CloudEmailAddress.ToString <<<< ().ToLower(
).Replace('smtp:', '')
    + CategoryInfo          : InvalidOperation: (ToString:String) [], RuntimeE
   xception
    + FullyQualifiedErrorId : InvokeMethodOnNull


In order to get around this, and to make the script work, simply run it in normal PowerShell instead of EMS.  It will work and create the required cloud.csv file.




Once you've done that, you can run the second script within EMS and it will work as expected. 

Winthrop Australia provides IT Support and Consultancy in the Perth area.

21 November 2016

Un-hide users from GAL

Exchange 2007
I am in the middle of running an Office 365 migration and during this migration, it's required that I un-hide the users from the GAL (disabled users) in order for O365 to recognise them and migrate them.

My client wants the disabled users' mailboxes migrated and then converted to Shared Mailboxes in order to maintain an archive of the mail.  In order to do this, we need a list of all the mailboxes that are hidden, then we can un-hide them.  Once we're done, we can use that same list to hide them again.

Generate CSV for all mailboxes hidden from the GAL

Get-Mailbox | Where {$_.HiddenFromAddressListsEnabled -eq $True} | Select Identity, HiddenFromAddressListsEnabled | export-csv c:\HiddenFromGAL.csv

Set $Users parameter

$users = import-csv C:\HiddenFromGAL.csv

Un-hide the hidden users

Foreach($_ in $users) {Set-mailbox $_.identity -HiddenFromAddressListsEnabled $false}

This will then allow you to perform the migration (in this case I am doing a Staged migration) without O365 failing to find the user accounts.  Once you're done, simply repeat the last two stages, changing $false to $true in the final command.
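For completeness, the re-hide pass looks like the un-hide one with the value flipped (using the same CSV generated earlier):

```powershell
# Re-import the list of mailboxes that were originally hidden from the GAL
$users = Import-Csv C:\HiddenFromGAL.csv

# Hide them again now the migration is complete
Foreach($u in $users) {Set-Mailbox $u.Identity -HiddenFromAddressListsEnabled $true}
```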

Contact Winthrop Australia to find out how we can provide you with IT Support in Perth.

Office 365 Migration | Unable to create endpoint | Unable to connect to remote server

I was recently running through an Office 365 Migration (Staged) where I was trying to create the Migration Endpoint.  As I was going through the configuration, I was getting the following error message:

After manually typing in the details, I got the following message:


This one was a tricky one as I was running the test through the 'Test Exchange Connectivity' website, and it was all passing without issues.  So it wasn't an autodiscover issue from what I could see. 

After looking into this one for a few hours and not being able to find what the issue could be, I came across an issue with the Autodiscover and IPv6.  When I was on the on-prem exchange server and I tried to ping the FQDN, it resolved an IPv6 address, even though this was disabled on the NIC.

To get around this, I had to edit the Hosts file on the server itself so that both the server name and the FQDN resolved to the IPv4 address.  After doing this, when I went through the process to create the new migration endpoint, I was able to get past this point and it discovered the server details automatically. 
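For illustration, the entries in C:\Windows\System32\drivers\etc\hosts looked something like this; the IP address and server names here are made up:

```
10.0.0.25    exch01
10.0.0.25    exch01.domain.local
```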


16 November 2016

WSUS downloads slow | BitsDownloadPriorityForeground

Recently I configured WSUS to download updates (about 150GB worth) and I noticed it was taking a very long time to download.  This little tip will save a lot of time waiting for the download to finish.

Run the following command on the WSUS Server in PowerShell:

(get-wsusserver).GetConfiguration().BitsDownloadPriorityForeground

This will show you whether the setting has been enabled or not.  We want it to say 'True'.

It will most likely say 'False' if you're experiencing slow downloads.  Once you've confirmed this, type the following to set it to 'True':


$Config = (Get-WsusServer).GetConfiguration()
$Config.BitsDownloadPriorityForeground = $True
$Config.Save()
Once you've used these three commands, run the first command again to confirm that this has changed over to 'True'.  There's no need to restart the services.  You should see that the downloads will speed up now.

Contact Winthrop Australia to find out how we can help you with your IT Support needs.

19 October 2016

PXE Boot | Boot into WinPE then Immediate Restart

I recently came across a little issue where I was PXE Booting a machine into the WinPE environment to start a Task Sequence to image the machine.  Once it had loaded and it had passed the "Loading Network Settings" window, it immediately restarted.

The most common cause for this is that you don't have the correct NIC drivers.  I rebooted again, then pressed F8 which brought me to a command prompt.   Once I was there, I waited until  it went past the "Loading Network Settings" page, then ran an IPCONFIG to see whether I got an IP address.  In this case I did.  That means it wasn't a NIC driver issue.

The next thing to check is the BIOS time.  Make sure this is accurate.  In my case, this wasn't accurate at all.  Once I reset this,  I rebooted again into WinPE, and the Task Sequence started without any problem.

A quick 5 second fix could help you save an hour of troubleshooting!

14 October 2016

OSD Task Sequence Failure | 0x80072EE2 | Network Connectivity Issues

Recently when I was out at a client, they were running OSD (Operating System Deployment) on a particular model of machine (Dell Optiplex 7040) and it started throwing up error messages during the "Download Operating System" step.  The error message was 0x80072EE2.

When pressing F8 to check smsts.log, there was nothing there out of the ordinary.  The only interesting thing I could see there, was that it hadn't added in the latest log data yet, so as far as it was concerned, there was nothing wrong with the OSD.

I checked my IP, which was fine.  I tried pinging the SCCM server (and any other server on the network) and noticed that I was getting a lot of packet loss.  This explains the issue I was getting.  Essentially, the network connectivity would be completely fine up until a point where it would start getting massive packet loss, and then fail during the download.

I ruled out a physical issue with cabling, patching and the device by trying different network ports on the wall, and also trying different Dell machines (all 7040 however).  I received the issue on all of the machines, virtually at the same point in the Task Sequence.

I attempted to inject different drivers into the Boot image, but that didn't help either.  I was satisfied that this wasn't a driver issue, and it wasn't a physical networking issue.  The hunt continues!

I added three Task Sequence Variables to the beginning of my OSD Task Sequence.  The idea behind these was to make the deployment less delicate, and to continue working through the TS if there's some packet loss etc.

The variables I added are the following:





After adding these TS Variables in there, I restarted the machine and went through the OSD sequence.  This time it completed without any issues.

Essentially the issue was that with the drivers and WinPE versions on this particular  machine, the NIC was a bit flakey.  Packets were dropped etc which originally caused it to fail.  These variables just told the Task Sequence to be less particular when it comes to timing out.  There will still be that  intermittent packet loss when doing the TS, but this time it won't cause it to fail.

06 October 2016

SCCM | Editing Object | Cannot edit the object, which is in use by ‘’ at Site ‘

Recently I have been working on a client's SCCM server, and it has been crashing a lot.  The problem with crashing is that it doesn't update SQL to tell it that the item is no longer in use, so the item remains 'locked'.  This means that if you try to open the object (in this case it's a boot image), it will say "Cannot edit the object, which is in use by <username> at site <sitename>".

You can try to resolve the issue by clicking 'retry edit', but it usually fails.

In order to get around this, there are two SQL queries that you will need to run, which will allow you to edit the object immediately.  They are the following:

select * from SEDO_LockState where LockStateID <> 0

and

DELETE from SEDO_LockState where LockID = '<LockID of the record identified in the previous query>'

That's it!  Once you've done that, you should be able to go into the ConfigMgr console and open up the object that was previously locked.

29 September 2016

Windows 7 hanging on 'Checking for Updates'

Recently I've had to install a fresh copy of Windows 7 in order to build up a SOE for a client.  The first thing I realised is that there's an update for the Windows Update agent itself.  DON'T DO THIS UPDATE!  Whatever you do!  

This will update the Windows Update to version 7.6.7600.320.  There are many issues that  have been documented with this version.  The most important I would say is the fact that you suddenly can no longer search for Windows Updates.  If you check for new updates, it will sit there saying "Checking for Updates" indefinitely.  

When imaging a new machine, ensure that you don't have it connected to the network, and when it asks to install updates, click "Ask me later".  Once you've done that, you will need to install two Windows Updates.  These you will need to download from another computer, and it's best to copy them over on a USB drive.

They are the following updates:

1. KB3020369

2. KB3172605


Make sure you install these updates in that order as well.  Once you've installed these updates, reboot your computer and then you will be able to go through and start downloading Windows Updates like normal. 

15 September 2016

Cumulative Update for Windows 10 Download Hanging | KB3189866

Microsoft has recently released the Cumulative Update for Windows 10 Version 1607 (x64) which a lot of people have been having issues with.  The update is hanging when your workstation tries to download it.


Microsoft are currently looking into getting this resolved, but in the meantime, you can manually download this update from the following location:

http://go.microsoft.com/fwlink/?LinkID=116494&updateid=0405fe13-e4d5-4aff-8cf1-0500f5f673f7

Manually install the update, reboot and you're good to go!

You may need to restart the Windows Update service once you've rebooted if it thinks that it's still downloading the update.
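If so, restarting the service from an elevated PowerShell prompt is quick:

```powershell
# Restart the Windows Update service
Restart-Service wuauserv
```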

External address pointing to internal server on a different port | URL Rewrite | Server Farm

This post continues from my previous post which outlines the same requirement, but without the use of Server Farms in IIS.  In this post, I will cover the rules required to achieve this whilst using server farms.  It is a different process because the use of Server Farms automatically creates ARR and URL Rewrite rules, so we have to work in with that.

Required Outcome
I have a website, gis.domain.com.  This is forwarded to the gis.domain.com server farm, which contains this website on a secondary IIS server.  The primary website is on TCP 80, and there is a secondary website called webapp which is accessible on TCP 8080.

My client would like to access gis.domain.com/webapp and have it re-write to <servername>.domain.local:8080.

To achieve this, all the work is to be done on the primary IIS website which maintains the Server Farms.

Rule #1
This rule is automatically generated when you set up Server Farms.  It will be called ARR_gis.domain.com_loadbalance




We need to amend this rule to show the following:




I have highlighted the sections that you will need to change from the default rule.  The rest of that rule can be left as-is.

Rule #2
The second rule will need to be created from scratch.  Create a new Blank Inbound Rule:

Add the following settings to that rule:


Note: the Rewrite address is an internally accessible address only.  This is fine as the server is acting as a reverse proxy and will route the traffic internally on the new port.

Ensure that the Stop processing setting is marked as False for both of the rules.


And that's it folks.  That's all you need to get the rules working and to allow forwarding to different websites on different ports when you have a Server Farm. 

External address pointing to internal server on a different port | URL Rewrite | No Server Farm

This post is related to URL Rewrite on an IIS server which does not have server farms enabled.  I will create another post to cover the same process with Server Farms enabled as it's a completely different requirement.  Click here to find out more about Server Farms.

Recently a client has asked us to look into URL Rewrite and Application Request Routing to see if we're able to achieve the following:


  • Access externally facing web address: subdomain.domain.com/webapp on port 80
  • ARR and URL Rewrite then forwards this to <servername>.domain.local on port 8080
The easiest way to show you how this was achieved would be by just showing you the web.config file.  This is the file that's located under the Default Web Site which defines all the settings with IIS that we will be changing.  You will simply have to change all the items I've labelled in bold and underlined, to suit your setup, and then you should be good to go.

I will also explain each bold/underlined item under this web.config file so you know what you're changing. 


<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <system.webServer>
        <rewrite>
    <rules>
        <rule name="Reverse Proxy to webapp" stopProcessing="true">
            <match url="webapp" />
            <action type="Rewrite" url="http://ca.atlab.local:8080" />
        </rule>
    </rules>
            <outboundRules>
                <rule name="Outbound Rule - URL Rewrite" preCondition="IsHTML">
                    <match filterByTags="A" pattern="^/(.*)" />
                    <conditions>
                        <add input="{URL}" pattern="webapp" />
                    </conditions>
                    <action type="Rewrite" value="/{C:0}/{R:0}" />
                </rule>
                <preConditions>
                    <preCondition name="IsHTML">
                        <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
                    </preCondition>
                </preConditions>
            </outboundRules>
        </rewrite>
    </system.webServer>
</configuration>

  • WEBAPP
    • Match URL="webapp"
      • This is the section where it looks for the trigger to run this rule.  If you don't have "/webapp", then it won't rewrite your URL and redirect you to the new location
  • http://ca.atlab.local:8080
    • action type="Rewrite" url="http://ca.atlab.local:8080"
      • Once you have triggered the rewrite by adding "/webapp" to your URL address, this is the location that it will forward the traffic to essentially.  In this case it's an internal server, but it can be any location (external or internal) on any port.
  • WEBAPP
    • <add input="{URL}" pattern="webapp" />
      • This is the ARR section of the config.  This should be named the same as the first section.

Configuration file is not well-formed XML | IIS

This one is a nice, easy one which I thought I'd post.

If you are changing rules or any config in IIS and you're doing it through applicationhost.config, there's a chance that you're going to corrupt the file and get the following error message:


Have no fear!  You can restore this file from a backup which is located in the following location:

C:\inetpub\history

Simply check the date that it was backed up and you can restore it from there.



Just copy and paste the applicationHost.config file and restart IIS and away you go!
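From an elevated PowerShell prompt, the restore looks something like this.  The CFGHISTORY folder name below is an example only; list C:\inetpub\history and pick the most recent backup that pre-dates your change:

```powershell
# Restore the backed-up config over the corrupt one
Copy-Item "C:\inetpub\history\CFGHISTORY_0000000010\applicationHost.config" `
          "C:\Windows\System32\inetsrv\config\applicationHost.config" -Force

# Restart IIS to pick up the restored configuration
iisreset
```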



Intel AMT: Configuration | AMT Status: Detected | Not Externally Provisioned

Recently I have been trying to set up and configure Intel SCS to leverage Intel AMT features through SCCM 2012 R2.  I've done this before and it's worked fine, but for whatever reason, with this particular client, SCCM is failing to detect that the workstations have been configured and that AMT is Externally Provisioned (thus not allowing me to control the power etc).

After running the Intel AMT: Configuration Task Sequence (which works by the way), I get the following messages in the AMTOPMGR.log file:


I've tested that I can log into the web GUI with the admin credentials and that's all fine, but for whatever reason, I'm getting AMT Status 1, which translates to 'detected' rather than 'externally provisioned'.  I checked all my settings and everything had been configured accurately.

The machine account was in the appropriate ACL which was mentioned in the SCS profile (that's a big one if you haven't done that).  There was nothing online which actually related to the issue I was having, so I logged a ticket with Intel themselves.  After a couple of hours of troubleshooting with an Intel guy from Oregon, US (who was very helpful) we basically came to the conclusion that we couldn't find out what was actually causing this error message.  Everything had been configured correctly.

With the end goal in mind and wanting to be able to start up lots of computers, or a single device, using the AMT Wake-Up feature, we compromised and used a third-party application called MeshCommander.  This application is installed on the SCCM server and essentially provides a whole lot more functionality to the Out of Band Management through SCCM.

To obtain the AMT functionality, run the Intel AMT: Configuration Task Sequence as per usual.  Then run the Discovery AMT Status like you normally would.  It will show up as Detected, which is fine.  From here, you can right-click on a device and click on the MeshCommander option:


Select Kerberos and TLS.


Once you have done this, and provided there's nothing wrong with your AMT Configuration TS, it will allow you full access to the Intel AMT section of that workstation, allowing you to power the device up and use Serial over LAN etc.


If you're wanting to power up an entire device collection, simply right-click on that collection and select the MeshCommander Option:



Whilst I was pulling my hair out trying to get this one resolved, I'm actually grateful that I had this issue, because the solution that Intel suggested to achieve the overall outcome seems to be a lot better than the initial solution of simply using the Power-On feature through Intel SCS and SCCM.

Overall, this is actually a much better way to control the machines through Intel AMT.  I highly recommend it.

09 September 2016

Intel SCS/SCCM | Intel AMT: Configuration Task Sequence | Failed to parse the XML file

I have recently been setting up Intel SCS/AMT configuration with SCCM 2012 for a client of mine. It's all going fairly smoothly but I noticed that when testing the Intel AMT: Configuration Task Sequence, it's failing (across multiple test machines).

Checking the SMSTS.log file, I can see the following error message:

"Failed to parse the XML file. Possible reasons - the file does not exist or access to it is denied; the file contains incorrect parameters; incorrect or missing encryption password/parameter"

At the end of this error message, it references the .XML file that I have setup as part of the configuration requirements.  

I checked the following

  • Location
    • Confirmed the .XML file is in the same location as the ACUConfig.exe file.  If it can read the one, it's reasonable to assume it can read the other.
    • Ensured that this had been packaged correctly and the distribution points had been updated.
  • Permissions
    • Ensured that permissions on the files and the folders allow the service account to access these files.
  • Password
    • Confirmed the password was correctly added into the configuration.bat file.
  • Decryption
    • Confirmed that I could access this .XML file using the password that had been added to the configuration.bat file.
I then decided to go through the ACU Wizard again to create a completely brand new .XML file.  This was just to humour the issue to see whether it had something to do with the file itself.  Needless to say, we got the same issue.

The next thing I tried to do was run the SCSEncryption.exe file manually to see whether it could decrypt the .xml file. This was successful:


So this confirms that decrypting the file isn't the problem and that the password is in fact correct. Nevertheless, it's still failing with the same error message.  The last thing I tried was to copy the ACUConfig.exe, the associated DLL, the XML and the Configure.bat files to a local machine (Desktop) and then run the configure.bat file from there.  This means that it will be referencing the file that I 100% know is there.  When doing this, I still got the same error message:


To work around this and to get this up and running (which obviously was the priority here), I improvised and actually leveraged the decrypted version of the XML file.  I ran the SCSEncryption.exe file against the XML file which then replaces it with the decrypted version.  I edited the Configure.bat file to remove reference to decryption and the password.

.\acuconfig /output console /output file ConfigAMT.log /verbose ConfigAMT ".\XMLDocumentName.xml" 

Taking the decryption completely out of the equation, I updated the Task Sequence, and then ran it again.  This time it was successful!  

I'm sure re-installing the add-on and re-creating all the files would have also worked, but that wasn't an option in this case.  Whilst the XML file is no longer encrypted, you can use NTFS permissions to ensure that only the service accounts can access and view this file.  If you do this, it should be little different from leaving the file encrypted, except now it works! 

07 September 2016

Asset Intelligence | SCCM | Expired credentials/certificate/token | Need to re-provision online account

This one is a nice and easy one.
If you're using SCCM 2012 and you're wanting to leverage the Asset Intelligence site role, you will most likely encounter the following error message if you haven't been assigned a certificate from Microsoft.


When you go through the initial setup, it will ask you to add the location of a .pfx file which would have been supplied by Microsoft.  In order to resolve this and allow SCCM to connect to the Microsoft Database, you will need to obtain a certificate.  This can be downloaded from a Microsoft Hotfix (https://support.microsoft.com/en-us/kb/3060648).  You can also obtain a certificate from your Microsoft account rep.  

You will need to extract these files and then you will find the certificate file.



Once you have this certificate, you will simply need to add it into the properties for the Site Service Role:


After you have updated this setting so it's using the certificate, you will then need to disable and then re-enable the Asset Intelligence Sync point.  Simply un-tick this, apply, then re-tick and apply again.