Well, the last day at TechEd 2010, but I still had some great sessions today. The first session was Sysinternals Primer: Process Explorer, Process Monitor, PsExec. I guess the Sysinternals suite doesn't need any introduction anymore; any Windows administrator should know about these tools. The question is: does every admin know how to use them? Well, I was about to find out for myself if this was the case. I do a lot of support and I have used these tools quite often. Three tools would be covered today, which, after seeing the title of the session, is a giveaway. For those of you who don't know where to find the Sysinternals tools: use Google or Bing, or go to the Sysinternals site, which will redirect you to the correct TechNet page.
So the first tool discussed in this session: Process Explorer. It's actually a very advanced version of the default Task Manager of Windows; you can even replace Task Manager with Process Explorer. An update: in version 14, disk and network I/O will also be visible in Process Explorer. Every process has a color, which of course has a meaning: light blue are processes running under the same account as Process Explorer, pink are services, yellow are .NET framework processes, green are processes that have just started and red are closing processes.
One of the great features in Process Explorer: let's say you have a server with a runaway process taking up all the CPU resources. You can use Process Explorer to set that process's affinity so it will only use the CPU core(s) that you specify. This way you can troubleshoot the server/computer more easily because the CPU now has a lower workload.
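As a side note, you can do the same affinity trick from the command line if you are launching the process yourself. A minimal sketch (the executable name is a placeholder):

```shell
:: Launch a process pinned to one core from cmd.exe.
:: /affinity takes a hex bitmask: 1 = first core only, 3 = first two cores.
start /affinity 1 SomeBusyApp.exe
```

For an already running process, Process Explorer's right-click > Set Affinity dialog does the same thing interactively.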
Process Monitor is the next tool being discussed. With this tool you can view what a process is accessing: which file(s), which registry key(s). Once you have done your capture you can search for something like "access denied" during an action that a process was executing. Or you can simply use it to find which registry key or file a process is accessing. There are filters which you can apply, and you should, because if you just start Process Monitor it will pick up a lot of information, and if you don't use the filter or marking tools available in the tool, you will simply have too much information to manage!
PsExec executes processes on remote computers. You can supply a set of credentials to be used on the remote PC, but be aware: PsExec is, of the entire Sysinternals suite, the only tool that places your password unencrypted on the network. All the other Sysinternals tools use standard Windows authentication. So take that into consideration if you are using this tool. Also don't forget the /accepteula parameter if you are using PsExec to run commands on remote computers, otherwise the EULA will be prompted, which you have to accept before being able to use it.
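To make that concrete, here is a sketch of a typical call (server and account names are placeholders; omitting -p makes PsExec prompt for the password instead of putting it on the command line):

```shell
:: Run ipconfig on a remote machine with explicit credentials.
:: Note: with -u, the password travels over the wire unencrypted (per the session).
psexec \\SERVER01 -u CONTOSO\admin -accepteula ipconfig /all
```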
A good tip, by the way, for installing your Sysinternals suite is this: create a folder in your Program Files called Sysinternals, extract the Sysinternals suite in there, unblock the files and add this folder to your path. This way you can run the tools from any location, and only admins can make changes in this folder; normal users do not have write access to it.
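Adding the folder to the path can be done from an elevated prompt; a sketch:

```shell
:: Append the Sysinternals folder to the machine-wide PATH (run elevated).
:: Careful: setx truncates values longer than 1024 characters, so check
:: your existing PATH length first with: echo %PATH%
setx /M PATH "%PATH%;C:\Program Files\Sysinternals"
```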
All You Need to Know About Certificates from Templates to Revocation. My first session at TechEd 2010 given by John Craddock. For those of you who have had the chance to see John, you'll know that I'm excited about this; he's a great speaker. Certificates have always been important and are getting even more important with the cloud services being deployed. This should be a great session. John makes sure that everyone is in the right session by explaining that if you don't already know what certificates are and how they work, this is probably not the session for you, because it's going to be a level 400 session. I'm staying.
When a certificate is presented to a client, it's up to the client to verify whether the certificate is valid or not, and up to the client whether to check the certificate against the revocation list or not. Since we are talking about certificates, you should have implemented good security to protect the certificates, otherwise they are useless. That would be the same as giving someone a passport without any kind of verification procedure being done… When implementing PKI, you should set it up multiple times in a lab, in such a way that you can script it. This reduces the chance of making mistakes during a manual (read: bad) installation of the PKI in the production environment.
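If you want to see exactly what a client's chain and revocation check looks like, certutil can walk the whole chain and fetch the CDP/AIA URLs for you. A sketch (the certificate file name is a placeholder):

```shell
:: Verify a certificate's full chain and fetch every CRL/OCSP URL it
:: references - essentially the revocation check a client performs.
certutil -verify -urlfetch mycert.cer
```

This is very handy in the lab to confirm that your published CRLs are actually reachable.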
Since this session is about certificate revocation, one topic touched on is that your CRL can be published in AD, but what about external users? It's better to publish your CRL as a web resource, and preferably a highly available one. During the session John asked how many people in the audience have a root certificate server that is also the certificate issuer. There were quite a few, I'm afraid; you should only use this in labs for quick tests, but NOT in production. He asked the same question for a 2-tier CA infrastructure, so a root certificate server and a second certificate server that is the issuer: it was like 30% of the crowd. Only a few people had 3 tiers and no one had 4 tiers (I have never seen these either). John's experience is that lots of customers are moving from 3 tiers to 2 tiers since it's easier to manage.
Now when it comes to certificate enrollment, your client will generate a key pair and create a certificate request, sign that request with its private key and send it, together with the public key, to the certificate server. When the request arrives at the server you should properly validate it. This is the place where you should create your policies for certificate approvals. The certificate request needs to be verified: is it for a web server, client authentication…? Depending on what the certificate is going to be used for, a correct certificate template should be used to issue the certificate. There are a lot of certificate templates, but you can still create your own. Make sure that you have the correct OIDs (Object Identifiers) for your certificate use. I wrote a blog post about this a few months ago on how to create a certificate template for SCOM agents; be sure to check it out. It's not rocket science but you need to understand what you're doing.
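On Windows, the request side of this flow is what certreq does for you. A minimal sketch, where the subject name and template name are placeholders for your own environment:

```shell
:: --- request.inf (placeholder values) ---
:: [NewRequest]
:: Subject = "CN=server01.contoso.com"
:: KeyLength = 2048
:: [RequestAttributes]
:: CertificateTemplate = WebServer
::
:: Generate the key pair and the signed request, then submit it to the CA:
certreq -new request.inf request.req
certreq -submit request.req server01.cer
```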
If you are testing your environment and it involves a Windows 7 client, take note that Windows 7 caches the CRL. This means that if you revoke a certificate and test this immediately on a Windows 7 client, it will still work. You need to flush this with a certutil command, or wait a long time. The command for this is: certutil -setreg chain\ChainCacheResyncFiletime @now
Great session; it didn't feel like level 400 to me, but a refresher on certificates by John Craddock is always great!
So the last session of TechEd 2010 for me will be How to (un)Destroy your Active Directory. First advice in this session: whenever you fix a problem in your Active Directory, take your time! Understand what you are doing; don't simply go implementing some KB article if you don't know what it's about. Check and RE-check your results, especially if it is a replication issue. The first thing you need when you have to troubleshoot a problem is documentation. I'm glad the speaker, Ralf Wigand by the way, said this, and I second that. I'm an operations guy myself and there is nothing worse than troubleshooting an IT problem that is not properly documented.
Ralf has been a consultant for Active Directory Services for 10 years and showed some examples of problems you can run into. One problem he bumped into at a customer with replication issues in AD was that they had disabled the DHCP Client service. If you read the description of this service in the services console, you know that the DHCP Client service is responsible for registering the IP address in DNS. So if the IP of a DC is not properly registered in DNS, you're bound to run into problems with replication. Another example was a customer who had a user that could not access a folder on a server. After checking which groups this user was in, it appeared that the group that had access to that folder was a universal group. Universal group memberships are stored in your Global Catalog servers (GCs). From the server where the problem existed, you couldn't ping the GC using its FQDN (Fully Qualified Domain Name). So if you are using universal groups, make sure that you have enough GCs available.
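Two quick commands that help with exactly these checks; a sketch, with the domain name as a placeholder:

```shell
:: Re-register this machine's A record in DNS (what the DHCP Client
:: service normally does for you).
ipconfig /registerdns

:: Ask the locator for a reachable Global Catalog in the domain.
nltest /dsgetdc:contoso.com /gc
```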
The next topic was the time sync of DCs in virtual environments. There are actually two ways there: one is to let the virtual machine of the DC sync its time with that of the host, the other is to use the standard Windows time sync service. You have to choose; do not use both, because you will run into problems someday. Best practice: disable the host time synchronization of the VM and use standard Windows time sync. I did my research OK, I guess, because that's how I implement this.
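After disabling time sync in the hypervisor's VM settings, pointing the Windows time service at the domain hierarchy looks roughly like this:

```shell
:: Configure w32time to sync from the domain hierarchy and apply it.
w32tm /config /syncfromflags:domhier /update
w32tm /resync
```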
Another best practice, which should be common knowledge, is that you should never edit the Default Domain Policy. Create a new policy and edit that one instead. I haven't run into any situation where I needed to change this policy. If you ever run into a situation where the Default Domain Policy is messed up, you can run dcgpofix to restore this policy.
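dcgpofix lets you scope the restore, so you don't have to reset both default policies at once:

```shell
:: Restore only the Default Domain Policy to its default state
:: (use /target:DC for the Default Domain Controllers Policy,
:: or /target:Both for both).
dcgpofix /target:Domain
```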
Lingering objects come to life when replication has not been running normally and certain AD objects have been deleted. Let me try to explain a bit more: when you delete an object in AD, it is not really removed; it is flagged (tombstoned) and gets a timestamp for when to delete it, but it is still replicated as usual and deleted without further notice when the time comes. So if it's not replicated while tombstoned, or a backup from before it was tombstoned is restored, the object will exist forever. If you run into a problem with lingering objects you can remove them with reference to a "clean" DC. The command for this is: repadmin /removelingeringobjects
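The full syntax takes three arguments; a sketch with placeholder names (the GUID is the DSA object GUID of a DC you know is clean):

```shell
:: DestDC01 holds the lingering objects; the GUID identifies the clean
:: reference DC; the last argument is the naming context to clean up.
repadmin /removelingeringobjects DestDC01 <CleanDC-DSA-GUID> "DC=contoso,DC=com"

:: Tip: add /advisory_mode first to see what would be removed
:: without actually deleting anything.
```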
So after a week of TechEd, I did 18 sessions, 2 labs and ran about 17 km in Berlin. Hopefully I get to go to TechEd again, because it's a great event. I met a lot of new people and bumped into a few friends.