TechEd Berlin 2010 Day 5

Posted: November 12, 2010 in Uncategorized

Well, the last day at TechEd 2010, but I still had some great sessions today. The first session was: Sysinternals Primer: Process Explorer, Process Monitor, PsExec. I guess the Sysinternals suite doesn't need any introduction anymore; any Windows administrator should know about these tools. The question is, does every admin know how to use them? Well, I was about to find out for myself if this was the case. I do a lot of support and I have used these tools quite often. 3 tools would be covered today, which, after seeing the title of the session, is a giveaway :) For those of you who don't know where to find the Sysinternals tools: use Google or Bing, or go to http://www.sysinternals.com which will redirect you to the correct TechNet page.

So, the first tool discussed in this session: Process Explorer. It's actually a very advanced version of the default Task Manager of Windows; you can even replace Task Manager with Process Explorer. An update: in version 14, disk and network I/O will also be visible in Process Explorer. Every process has a color which of course has a meaning: light blue processes run under the same account as Process Explorer, pink are services, yellow are .NET Framework processes, green marks newly started processes and red marks exiting processes.
One of the great features in Process Explorer is this: let's say you have a server with a process that is running away and taking up all the CPU resources. You can use Process Explorer to set that process's affinity so it will only use the CPU core(s) that you specify. This way you can troubleshoot the server/computer more easily because the CPU now has a lower workload. You can do something similar from PowerShell, as sketched below.
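As an illustration of the same idea, here is a minimal PowerShell sketch that pins a process to a single CPU core by setting its processor affinity mask (the process name is just a made-up example):

    # Pin all instances of a hypothetical runaway process to core 0 only.
    # ProcessorAffinity is a bitmask: 0x1 = core 0, 0x3 = cores 0+1, etc.
    Get-Process -Name "runawayapp" | ForEach-Object {
        $_.ProcessorAffinity = 0x1
    }

In Process Explorer itself you get the same result by right-clicking the process and choosing Set Affinity.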
Process Monitor was the next tool discussed. With this tool you can view what a process is accessing: which file(s), which registry key(s). Once you have done your capture you can search for something like "access denied" during an action that a process was executing, or simply use it to find which registry keys or files a process is accessing. There are filters you can apply, and you should, because if you start Process Monitor it will pick up a huge amount of information, and if you don't use the filter or highlighting tools available in the tool, you will simply have too much information to manage!
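Process Monitor also has some handy command-line switches for unattended captures. A small sketch of how they can be combined (the paths are examples):

    # Start a capture silently, writing events to a backing file instead of RAM
    procmon.exe /AcceptEula /Quiet /Minimized /BackingFile C:\Temp\trace.pml

    # Later, stop the running capture
    procmon.exe /Terminate

This is handy for reproducing an intermittent problem without babysitting the GUI.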
PsExec executes processes on remote computers. You can give it a set of credentials to be used on the remote PC, but be aware: of the entire Sysinternals suite, PsExec is the only tool that places your password unencrypted on the network. All the other Sysinternals tools use standard Windows authentication, so take that into consideration if you are using this tool. Also, don't forget the /accepteula parameter if you are using PsExec to run commands on remote computers, otherwise the EULA prompt will appear and has to be accepted before you can use the tool.
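A typical invocation looks something like this (the server and account names are made up):

    # Run a command prompt on a remote server, accepting the EULA up front
    # so nothing blocks the remote run
    psexec.exe \\SERVER01 /accepteula -u CONTOSO\admin -p P@ssw0rd cmd.exe

Because of the cleartext password issue mentioned above, prefer running PsExec under your own logged-on credentials (omit -u/-p) whenever you can, so standard Windows authentication is used instead.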
A good tip, by the way, for installing your Sysinternals suite is this: create a folder in your Program Files called Sysinternals, extract the Sysinternals suite in there, unblock the files and add this folder to your path. This way you can run the tools from any location, and only admins can make changes in this folder; normal users do not have write access to it.
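Sketched out in PowerShell, that setup could look like this (Unblock-File assumes PowerShell 3.0 or later; on older boxes use the file Properties dialog or the Sysinternals streams.exe tool instead):

    # Unblock the extracted tools (removes the "downloaded from the internet" flag)
    Get-ChildItem "C:\Program Files\Sysinternals" | Unblock-File

    # Append the folder to the machine-wide PATH (takes effect in new sessions)
    [Environment]::SetEnvironmentVariable("Path",
        [Environment]::GetEnvironmentVariable("Path", "Machine") + ";C:\Program Files\Sysinternals",
        "Machine")

After opening a new prompt you can run procmon, psexec and friends from anywhere.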

All You Need to Know About Certificates from Templates to Revocation. My first session at TechEd 2010 given by John Craddock. For those of you who have had the chance to see John, you'll know that I'm excited about this; he's a great speaker. Certificates have always been important and are getting even more important with the cloud services being deployed, so this should be a great session. John made sure that everyone was in the right session by explaining that if you don't already know what certificates are and how they work, this is probably not the session for you, because it's going to be a level 400 session. I'm staying :)
When a certificate is presented to a client, it's up to the client to verify whether the certificate is valid or not, and it's up to the client whether to verify the certificate against the revocation list. Since we are talking about certificates: you should have implemented good security around issuing them, otherwise certificates are useless. That would be the same as giving someone a passport without any kind of verification procedure being done... When implementing PKI, you should set it up multiple times in a lab, in such a way that you can script it. This reduces the chance of making mistakes during a manual (read: bad) installation of the PKI in the production environment.
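Just to sketch what "script it" can look like: on newer Windows Server versions (2012 and up, so not what we had at the time) the ADCSDeployment PowerShell module can install a CA unattended. Treat this as an assumption-laden illustration, not what John demoed; all names and values are examples:

    # Install the CA role and configure an enterprise root CA in one scripted pass
    Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
    Install-AdcsCertificationAuthority -CAType EnterpriseRootCA `
        -CACommonName "Contoso-Root-CA" `
        -ValidityPeriod Years -ValidityPeriodUnits 10 -Force

On Windows Server 2008/2008 R2 you would script the equivalent with certutil and an unattended CAPolicy.inf instead.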
Since this session is about certificate revocation, one topic touched on was that your CRL can be published in AD, but what about external users? It's better to publish your CRL as a web resource, and preferably a highly available one. During the session John asked how many people in the audience have a root certificate server that is also the certificate issuer. There were quite a few, I'm afraid; you should only use this in labs for quick tests, but NOT in production. He asked the same question for a 2-tier CA infrastructure, so a root certificate server and a second certificate server that is the issuer; that was about 30% of the crowd. Only a few people had 3 tiers and no one had 4 tiers (I have never seen those either). John's experience is that lots of customers are moving from 3 tiers to 2 tiers, since this is easier to manage.
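If you want to check whether clients can actually reach your CDP/AIA locations, certutil can chase all the URLs embedded in a certificate for you. A quick sketch (the file name is an example):

    # Build the chain and fetch every CRL/AIA/OCSP URL in the certificate
    certutil -verify -urlfetch C:\Temp\webserver.cer

    # Or use the interactive URL retrieval tool
    certutil -URL C:\Temp\webserver.cer

Any URL that times out here is a URL your external users will be stuck on too.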
Now, when it comes to certificate enrollment, your client creates a certificate request containing its public key, signs it with its private key, and sends it over to the certificate server. When the request arrives at the server you should properly validate it; this is the place where you should create your policies for certificate approvals. The certificate request needs to be verified: is it for a web server, for client authentication...? Depending on what the certificate is going to be used for, the correct certificate template should be used to issue the certificate. There are a lot of certificate templates, but you can still create your own. Make sure that you have the correct OIDs (Object Identifiers) for your certificate use. I wrote a blog about this a few months ago on how to create a certificate template for SCOM agents; be sure to check it out. It's not rocket science but you need to understand what you're doing :)
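For reference, a request like that can be generated with certreq and an INF file. A minimal sketch with the two OIDs that matter for SCOM agents (server and client authentication); the subject name is an example:

    ; request.inf
    [NewRequest]
    Subject = "CN=agent01.contoso.com"
    Exportable = TRUE
    KeyLength = 2048
    KeyUsage = 0xA0 ; digital signature, key encipherment

    [EnhancedKeyUsageExtension]
    OID = 1.3.6.1.5.5.7.3.1 ; Server Authentication
    OID = 1.3.6.1.5.5.7.3.2 ; Client Authentication

Then run certreq -new request.inf request.req and submit the resulting .req file to your CA.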
If you are testing your environment and it involves a Windows 7 client, take note that Windows 7 caches the CRL. This means that if you revoke a certificate and immediately test this on a Windows 7 client, it will still work. You either have to wait quite a long time or change this behavior with a registry setting; the command to flush the cached CRL information is:

    certutil -setreg chain\ChainCacheResyncFiletime @now
Great session! It didn't feel like a level 400 to me, but a refresher on certificates by John Craddock is always great!

So the last session of TechEd 2010 for me was How to (un)Destroy your Active Directory. First advice in this session: whenever you fix a problem in your Active Directory, take your time! Understand what you are doing; don't simply go implementing some KB article if you don't know what it's about. Check and RE-check your results, especially if it's a replication issue. The first thing you need when troubleshooting a problem is documentation. I'm glad the speaker, Ralf Wigand by the way, said this, and I second that. I'm an operations guy myself and there is nothing worse than troubleshooting an IT problem that is not properly documented.
Ralf has been a consultant for Active Directory Services for 10 years and showed some examples of problems you can run into. One problem he bumped into, with a customer who had AD replication issues, was that they had disabled the DHCP Client service. If you read the description of this service in the services console, you know that the DHCP Client service is responsible for registering the IP address in DNS. So if the IP of a DC is not properly registered in DNS, you're bound to run into replication problems. Another example was a customer who had a user that could not access a folder on a server. After checking which groups this user was in, it appeared that the group that had access to that folder was a universal group. Universal group membership is stored in your Global Catalog servers (GC), and from the server where the problem existed you couldn't ping the GC using its FQDN (Fully Qualified Domain Name). So if you are using universal groups, make sure that you have enough GCs available.
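Both of these are quick to check from the command line. A couple of sanity checks I would run (the domain and host names are examples):

    # Does DNS resolve the DC, and which DC/GC does the locator hand out?
    nslookup dc01.contoso.com
    nltest /dsgetdc:contoso.com /GC

    # Quick overview of replication health across all DCs
    repadmin /replsummary

nltest with /GC fails loudly if no Global Catalog can be located, which matches the universal-group scenario above.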
The next topic was the time sync of DCs in virtual environments. There are actually 2 options there: one is to let the virtual machine of the DC sync its time with that of the host, the other is to use the standard Windows time sync service. You have to choose one; do not use both, because you will run into problems someday. Best practice: disable the host time synchronization for the VM and use standard Windows time sync. I did my research OK, I guess, because that's how I implement this :)
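For the Windows side of that, w32tm is your friend. A minimal sketch for a virtualized DC, run after disabling time sync in the VM's integration services settings:

    # Let the machine take its time from the domain hierarchy and restart sync
    w32tm /config /syncfromflags:domhier /update
    w32tm /resync

    # Verify where the machine is actually getting its time from
    w32tm /query /source

The PDC emulator itself should instead be configured against a reliable external NTP source.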
Another best practice, though it should be common knowledge, is that you should never edit the Default Domain Policy; create a new policy and edit that one. I haven't run into any situation where I needed to change the default policy. If you ever run into a situation where the Default Domain Policy is messed up, you can run dcgpofix to restore it, as shown below.
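For completeness, the syntax is simple. Note that this restores the policy to its out-of-the-box state, so any legitimate customizations are lost too:

    # Recreate only the Default Domain Policy
    dcgpofix /target:Domain

    # Or restore both it and the Default Domain Controllers Policy
    dcgpofix /target:Both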
Lingering objects come to life when replication has not been running normally and certain AD objects have been deleted. Let me try to explain a bit more: when you delete an object in AD, it is not really removed; it is flagged (tombstoned), gets a timestamp for when to delete it, and is still replicated as usual, then deleted without further notice when it is time. So if it's not replicated while tombstoned, or a backup is restored from before it was tombstoned, the object will exist forever. If you run into a problem with lingering objects you can remove them with reference to a "clean" DC. The command for this is repadmin /removelingeringobjects, as sketched below.
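The full syntax takes the DC to clean, the GUID of a known-good reference DC, and the naming context; all values below are placeholders:

    # Dry run first: /advisory_mode only reports what would be removed
    repadmin /removelingeringobjects dc02.contoso.com `
        0b6aa8a0-1111-2222-3333-444455556666 `
        "DC=contoso,DC=com" /advisory_mode

    # Then run it again without /advisory_mode to actually remove them

You can get the reference DC's GUID (the "DSA object GUID") from repadmin /showrepl on that DC.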

So after a week of TechEd I did 18 sessions, 2 labs and ran about 17 km in Berlin. Hopefully I get to go to TechEd again, because it's a great event. I met a lot of new people and bumped into a few friends.

So after yesterday's party from 1E in the Puro Lounge, which ended around 3h00 AM, I was glad I didn't plan any breakout session this morning. Do I need to say that it was a good party if it ended that late? :p Before getting into the Hands On Labs (HOL) I paid a visit to the 1E stand in the exhibition hall to thank them for the great party last night. Apparently they had an after-party which lasted until 5h00 AM, so they were pretty tired.
I went to the HOL area and started some labs around System Center Configuration Manager vNext. I'm not a Configuration Manager guy, but I would like to develop some general knowledge about it. What I did see is that the collections have been removed from the product. Hopefully I get to work with this product more in the future.

Time for another breakout session: Microsoft System Center Virtual Machine Manager 2008 R2: Advanced Virtualization Management. On my way to this session I bumped into Alex De Jong. I had Alex as a trainer for SCOM and Hyper-V. He does lots of interviews with speakers here at TechEd; these interviews are posted on the TechNet Edge site, definitely worth checking out! He was actually searching for a guy who has been to 20 editions of TechEd. It wasn't even called TechEd back then! :o
Getting to my next session, I should have known that this would be a session where PowerShell would dominate strongly. When the question was asked who does PowerShell automation for VMM, only 5% of the room raised their hand. For those who do not know this yet: like Exchange and Lync, the SCVMM console runs on top of PowerShell. The SCVMM GUI has a great option which shows you the PowerShell code it will execute for the actions you have chosen; that code is a good starting point for PS automation. There were some other topics than PS in this session, like SP1 for Windows Server 2008 R2, which brings the Dynamic Memory option. Before you can use that, make sure that you upgrade the Integration Services for those VMs.
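To give you an idea of where that automation starts, a minimal VMM 2008 R2 sketch (the snap-in ships with the VMM admin console; the server name is an example):

    # Load the VMM snap-in and connect to the VMM server
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"
    Get-VMMServer -ComputerName "vmm01.contoso.com"

    # List all VMs with their current status
    Get-VM | Select-Object Name, Status

The View Script button in the GUI wizards spits out exactly this kind of code for you to reuse.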
It was amazing how few people in the room knew the PRO feature (Performance and Resource Optimization) in SCVMM. That's really a shame, knowing that the PRO tips actually come from the SCVMM management pack in SCOM (yeah, I like SCOM :p). So for those of you who don't know what PRO is, I'll explain it in short: PRO tips tell SCVMM to migrate VMs automatically when a Hyper-V host is under heavy stress, so basically it's nothing more than dynamic VM management for your Hyper-V cluster.
A handy tool is the VMM Configuration Analyzer 2008 R2, a free and useful tool that you can use to troubleshoot when PRO tips are not being applied.
A best practice tip to take home: if you want to live migrate the VM that has SCVMM installed in it, and this VM is highly available, use the Cluster Administrator console to live migrate it. If you live migrate this VM from within the SCVMM console, the behavior is unpredictable.

My last session today was: Under the Hood: What Really Happens During Critical Active Directory Operations. This was an interactive session where admins could ask questions of the speaker, who would try to clarify things or help the admins out. So I'll try to give you an overview of some of the covered topics:

  • Forestprep updates the AD schema but it does not change security rights; it only adds information to the AD.
  • Domainprep does change permissions. In Windows 2003 they in fact changed the permissions to secure AD better by limiting what an anonymous access account can view.
  • Computer accounts that are not used in AD: disable or delete them, or rejoin the computer so that its password is changed every 30 days. Otherwise this could be a security issue, because someone might access your network with such a computer account (see the sketch after this list for finding them).
  • Automatic site coverage is a mechanism for scenarios where a site does not have a domain controller: a domain controller from another site will register itself as a domain controller for that site. The mechanism is based on which DC is closest to that site (based on site link cost).
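A quick way to hunt down those stale computer accounts is dsquery; a sketch (the 8-week window is just an example):

    # Computers that have not logged on in the last 8 weeks
    dsquery computer -inactive 8 -limit 0

    # Pipe them into dsmod to disable in bulk (be careful!)
    dsquery computer -inactive 8 -limit 0 | dsmod computer -disabled yes

Note that -inactive requires a Windows Server 2003 or later domain functional level.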

Today no parties; I might go out with some peers to grab a meal and a beer, then head to bed early, because when the convention is done we have an 8-hour drive back home.

3rd day at TechEd 2010 in Berlin. I got up at 5h45 AM to go out for a short run in Berlin before going to the conference center. We've been doing some sightseeing in the evenings the last 2 days, so I know some routes by now. A quick 6,5 km; nothing better to clear the brain after yesterday's sessions and a few beers. The first session planned for today was Microsoft Exchange Server 2010 SP1 Upgrade and Coexistence: Questions and Answers From Previous Versions of Exchange. It was an interactive session on upgrade scenarios for Exchange 2010. I'm not an Exchange implementer, but I picked up some good tips and found out what to look for during Exchange upgrades. An example is this site: https://www.testexchangeconnectivity.com/ You can use this site to test your Exchange external connectivity configuration. It's even recommended that you use this site whenever you make any changes concerning external connectivity in your Exchange environment.
The second session I attended was given by Ilse Van Criekinge and was about the new OCS server: Microsoft Lync Server 2010 Management, Administration, and Delegation. She showed lots of PowerShell stuff during the demos, which (again) indicates that PowerShell is THE way to manage your servers, today and in future releases. If you do not use PowerShell yet, make sure you pick it up, because all Microsoft products will have PowerShell integrated and, like in Exchange, not all configuration can be done in the GUI. Also, as in Exchange, and again this will be the case for all Microsoft products in the future, the GUI uses PowerShell cmdlets in the background. Lync Server 2010 is no exception and uses PowerShell 2.0, so remote PowerShell is there for you to use! That being said, the GUI for Lync is web-based and makes use of Silverlight technology. If you know that I'm a pro-SCOM IT dude, you'll know that I'm happy to say that Lync Server 2010 already has a MP for SCOM 2007 R2. To install a Lync server you need to use the Topology Builder, which connects to the Central Management Store (CMS) and will configure the Lync servers via file transfer (SMB).
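That remote PowerShell support means you can manage Lync from your workstation without installing the admin tools. A sketch of how that session setup looks, assuming the usual /OcsPowershell endpoint (the pool URL is an example):

    # Connect to the Lync Server remote PowerShell endpoint
    $session = New-PSSession -ConnectionUri "https://pool01.contoso.com/OcsPowershell" `
        -Credential (Get-Credential)

    # Pull the Lync cmdlets into the local session and try one out
    Import-PSSession $session
    Get-CsUser -ResultSize 10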
I didn't attend any session during lunch today, but I made my way to the exhibition hall because I had only passed through briefly yesterday. I had a nice chat with the guys from JalaSoft, who offer an Operations Manager console for mobile devices. For the moment only Blackberry and Windows Mobile devices are supported; an iPhone console should become available by the end of the year. Future versions should also support connecting to multiple SCOM infrastructures, but that is still under development. The software, which is called Wings by the way, is a service running on the Operations Manager management server. I'll need to play with that later when I get back home…
My next session: Troubleshooting Group Policy. This session was given by Jeremy Moskowitz, the driving force behind GPAnswers.com (great site!). I picked up some good resources to troubleshoot GPO problems, like the GPOTool. GPO troubleshooting needs to be done in 2 parts: one is AD replication and the second is SYSVOL replication. To troubleshoot SYSVOL replication, you can put a TXT file in the SYSVOL folder and check that it replicates to the other servers. Another great tool for troubleshooting GPO problems is GPLogView, which can be downloaded from the Microsoft site, to do advanced troubleshooting on a client. To troubleshoot GPO preferences you should use the Application log in the event viewer. If you can't find the problem using the event log, you can always enable tracing for final troubleshooting.
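The everyday starting point remains gpresult, though. A small sketch of the checks I would run on a misbehaving client first (paths and domain are examples):

    # Human-friendly HTML report of applied GPOs, denied GPOs and errors
    gpresult /h C:\Temp\gp-report.html

    # Force a policy refresh and watch for errors
    gpupdate /force

    # Drop a marker file in SYSVOL to test replication between DCs
    echo test > \\contoso.com\SYSVOL\contoso.com\replication-test.txt

The marker-file trick is the one from the session: check on each DC whether the file shows up.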
The next planned session for the day was System Center Data Protection Manager 2010 in the Datacenter, given by Jason Buffington. I attended a session by Jason last week when I went to the System Center day in Belgium; great and passionate speaker. When he checked how many people are using DPM 2007 or 2010, almost half of the room raised their hand, meaning that DPM is a great and commonly used product in the System Center portfolio.

One of the most important things you need to know about DPM agents is that each VSS writer is written by the corresponding product team; for example, the VSS writer for Exchange is written by the Exchange guys. This means there is only one type of DPM agent that needs to be installed, whereas with other backup products you need to buy and install a separate agent for file backup, SQL, Exchange and so on.

VSS (the Volume Shadow Copy Service) consists of 3 components: the Requester, the Writer and the Provider. A backup works like this: the DPM server talks to the DPM agent installed on the host it is going to back up. The agent talks to the VSS Requester and says "give me what you got"; at this point the agent does not know what kind of data is going to be backed up. The Requester then talks to the Writer, and what happens next depends on the data (Exchange, SQL, file…); remember that VSS is not DPM-related, it depends on the product. The Writer sends the data to the Requester, which sends it to the agent, which in turn sends the data over to the DPM server. For VMs on a Hyper-V host the process is the same: you only need to install an agent on the host, there is no agent required in the guest VM. The Hyper-V Integration Components install a VSS writer for this and take care of the rest (the same process as for other backups), and the overhead in the backup data is removed by the DPM agent on the Hyper-V host.

Regarding client backups, the licensing is great: if you have the client licenses, the server is free! (The reason for this is that the server does not do anything.) DPM 2010 now has the ability to protect a complete SQL instance; in DPM 2007 you had to add each newly created database yourself, because DPM did not add it automatically (it could be scripted, however). With DPM 2010, if you select the entire instance to be protected, newly created databases are automatically protected. If you add volumes to your DPM server, do not format them, because then DPM cannot use them; it needs block-level storage. Jason also had an announcement to make for DPM 2010: with a single DPM 2010 server you can now back up 3000 clients instead of 1000. I should play with this technology in my home lab :)
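If you ever need to check that VSS plumbing on a protected server, vssadmin will show you which writers the installed products have registered. A quick sketch:

    # List all registered VSS writers and their current state
    vssadmin list writers

    # List the available shadow copy providers
    vssadmin list providers

A writer in a failed state here is a common reason for DPM (or any VSS-based backup) jobs to fail.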
My last session today was What's New in Operations Manager Since R2. Yeah, I know, another SCOM session, but what can I say, I like this product! The agenda for this session had a lot of topics, which indicates that SCOM is still growing rapidly. I'll summarize the topics without going into details:

  • The default MP contains reports to find out which monitors are generating (too many) alerts
  • Bulk URL Editor to import multiple URLs at the same time. The tool is available on the installation media (part of the resource kit)
  • Service Level Dashboard 2.0 for OpsMgr R2.
  • Management Pack Authoring:
    • Trace Workflow is a tool you can use to do online tracing of a workflow. It is also available via the resource kit. NOTE: when trace workflow is enabled it can have a small impact on performance.
    • There is a BPA available in the Authoring console to check the MP you created.
    • The Authoring console contains a spell checker so you can check your MP before deploying it at a customer. I can definitely use that :p
  • You can generate reports for ACS for Cross Platform
  • The Visio Add-in enables you to create user-friendly views that show the health state of components. The components contain a link which opens the OpsMgr web console where, given sufficient rights, the user can perform actions, like mounting a db.
  • In CU3 there is a view which shows the state of the agents being updated (after an update); in 2007 SP1 you needed to create this view yourself, but it wasn't as nice as the new view.
  • Also in CU3 there is the Azure MP. To implement it you will be guided through a wizard which asks you for some information and 2 Run As accounts.

Of course, every SCOM admin should know that you do NOT store your distributed applications in the Default Management Pack; create a new MP and store them in there.

This evening 2 colleagues and I are invited to a party from 1E. The party will be held in the Puro Sky Lounge in Berlin, which is on the 20th floor. It should have a great view over Berlin; I'll let you know tomorrow how it was ;)

So, the first day of TechEd with breakout sessions; besides meeting with peers, this is what TechEd is all about for me. The first session of the day was Advanced Storage Infrastructure Best Practices to Enable Ultimate Hyper-V Scalability. The session was given by someone from EMC but was not EMC-branded because of that; it was given in such a way that the content is generally applicable, no matter which storage vendor you use. The session was oriented towards deploying a private cloud where you would use the storage system for faster provisioning and deployment of your VMs. In the normal way of deploying Hyper-V VMs you would use the SCVMM console (with the Self Service Portal) to provision your VMs. What the guys from EMC had done was use snapshot technology and some PowerShell scripting to deploy VMs, which actually resulted in a much faster deployment. The concept is that they create a golden VM, take a snapshot of the VM, import the disks, re-signature them and then add them as CSVs to the Hyper-V cluster. During the demo movie the first 5 VMs deployed by SCVMM came up faster, but from there on the snapshot/scripting solution was a lot faster.
Next was a level 400, so it was a good thing I was awake by then: Impact of Cloning and Virtualization on Active Directory Services. With the environments of today, where everything is being consolidated into virtual machines, this session was simply a must-follow in my agenda. Several examples/situations were given where cloning could have some really nasty effects on your virtualized AD environment. Without going into too much detail about this session, because this would become a very long blog, some topics that you should keep in mind or definitely consider when cloning in a virtualized (AD) environment are:

  • When a domain is created, the computer SID is used to create the domain SID. This means that if you create a VM, clone it, create a DC out of it, and then want to create another DC for a child domain, you need to do a clean install of a server or run SYSPREP on the clone. If not, the domain SIDs would become the same and you will have serious issues (see the sketch after this list for checking SIDs).
  • So use SYSPREP if you are cloning!!
  • If a DC is demoted, the computer SID is regenerated.
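Since we are on Sysinternals anyway this week: PsGetSid is a quick way to compare machine and domain SIDs when you suspect a clone went wrong. A sketch (the names are examples):

    # Show the SID of the local machine
    psgetsid

    # Show the SID of a remote machine and of the domain
    psgetsid \\SERVER02
    psgetsid CONTOSO

Two machines reporting the same SID is the tell-tale sign that SYSPREP was skipped.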

My 3rd session today was Attack & Defense: Authentication and Passwords. During the session some live demos were given on how easy it is to take advantage of badly configured networks/servers to get access. The main message is that with applications moving into the cloud, certificates are being installed on the clients, and you need to secure those clients because these certificates can easily be abused.
The next session was about SCOM, one of my favorite System Center products: Introducing the Next Generation of SCOM. One of the major announcements was that in the new version the topology of Operations Manager has changed in such a way that there is no RMS (Root Management Server) required anymore, which I'm sure many people will be pleased with. There are some new dashboard views which combine multiple views; especially the ones for monitored network devices look very slick! The new web console is now Silverlight-based, and personalizations that have been done in the SCOM console are pushed to the web console as well. The new version of SCOM will have the capability to monitor J2EE, and it will be an in-place upgrade from the current version of SCOM, so your customer investment is protected. The beta version should be available somewhere in Q2 of 2011, the RC in Q3 of 2011, and it should go RTM in Q4 2011 (a good reason to go to next year's TechEd) :)
Next session: Small Business Server 2011 Standard. This should be released by the beginning of December 2010. To make it easier for customers there is now the option to buy Add-On offers; in previous versions customers needed to choose between the Standard and Premium versions (which had a fairly large difference in pricing). For CALs (Client Access Licenses), customers only have to buy extra CALs for the Premium Add-on for the users that connect to the SQL instance. Line Of Business (LOB) applications are best installed on a member server, since the new version of SBS uses Exchange 2010, which already creates a heavy load on the SBS box itself.
For the last session of the day I followed another Hyper-V track: Disaster Recovery by Stretching Hyper-V Clusters across Sites. One of the first things we were told is that disaster recovery scenarios should be automated, because people are not reliable. The technical part of the session was cut into 3 pieces: the network, the storage and the quorum of the cluster. For the network, if the sites are far apart you can change the settings for the heartbeat link between the cluster nodes, to prevent a failover just because a heartbeat timed out over the WAN (see the sketch below). Cluster Shared Volumes (CSVs) must be on the same subnet, so if you are going over a WAN, a VLAN should be configured. On the storage side of clustering across sites it's best to have hardware-enabled replication; you should talk to your storage vendor about this, because Hyper-V does not have a technology like SQL database mirroring or Exchange log shipping. Whether the replication is synchronous or asynchronous depends on your business needs and how much data you are willing to lose. Node and File Share Majority is the best quorum solution for multisite clusters. For the quorum of the cluster it's a best practice not to put the File Share Witness in one of the cluster sites, because you can lose 2 votes during a site failure, which can cause the whole cluster to go down. Using command-line or PowerShell commands you can force the quorum when the nodes do not have enough votes, but note that this might have a performance impact on the cluster.
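A sketch of the knobs involved, with made-up values; these are the 2008 R2 cluster heartbeat properties as I understand them, so tune them to your own WAN:

    # Relax heartbeat timing for nodes in different subnets/sites
    cluster /prop CrossSubnetDelay=2000
    cluster /prop CrossSubnetThreshold=10

    # Force the cluster service to start without quorum on a surviving node
    net start clussvc /forcequorum

Forcing quorum is a last-resort recovery step; make sure the other site is really down before you do it.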
To close off the day I went to a steakhouse in Berlin with 2 other colleagues and had a few beers. I'll probably go out for a run tomorrow morning before heading to the conference.

TechEd Berlin 2010 Day 1

Posted: November 8, 2010 in Computer and Internet

I did not get to go to the TechEd event in Berlin last year, but this year I got a green light. Because it was a last-minute deal and almost every hotel in the neighborhood was fully booked, my colleague and I found a hotel more than 6 km from the Berlin Messe, where the TechEd event takes place. Also, due to the just-in-time booking, we did not go to Berlin by plane but went by car instead. From Leuven to Berlin is about a 750 km drive, so after some short stops and lunch at Burger King we arrived in Berlin around 17h30.

First we went to the Messe to register ourselves and get our badge and bag. We had noticed earlier on the site that if you go to Berlin by car and enter the center of the city, meaning within the S-Bahn ring, you need an environmental sticker on the car. This sticker can only be bought at certain points in Berlin. Luckily the information desk helped us out after doing some google-ing, so we went to get this green sticker. We needed to hurry, because the garage where we needed to get this sticker would close at 18h00. After arriving at the garage at 17h59 (really, that sharp :p) we needed to provide some information about the car and the sticker was created. This sticker costs from 5€ up to 15€ depending on how "bad" your car is for the environment. We came to Berlin with a BMW 525 break with a 2,5l 6-cylinder engine in it, so we were thinking: OK, probably 15€ since the car has a CO2 emission of 176 g, but surprisingly we only needed to pay 5€. I guess Germans really do like BMWs :p

After the sticker was applied behind the windshield we still needed to check into our hotel. Once in the hotel it was almost 19h30 and we were getting hungry, so we quickly unpacked our bags and headed out to an Italian restaurant. Back in the hotel I set up my internet connection and checked my emails and favorite feeds (hey, I was offline the whole day!).

So, first day at TechEd; not much done except getting our badges, and we did not see the opening keynote session. But tomorrow the technical sessions start, which will certainly be more interesting.

In Operations Manager 2007 you can monitor hosts that are in another domain or maybe in a DMZ. For this to work you'll need to create a certificate. If you're working with an Operations Manager Gateway you'll also need this.
The reason you need this certificate is that the Root Management Server (RMS) needs to trust the agent installed on the host, and vice versa. The certificate will be used for the authentication of the agents, the Health Service to be more specific. Creating these certificates is quite easy if you have a certificate template, so I'll try to explain in this blog how you can create a certificate template for Operations Manager. Note that this will only work if your Certificate Authority is an Enterprise Root CA; this will not work with a Standalone Root CA.

The first thing you need to know is: is there a certificate authority in the network? If you're implementing SCOM in an unknown environment, a quick way to check if there is a CA is to look in Active Directory. Open Active Directory Users and Computers and search for the group Cert Publishers, which is a built-in AD group. By checking the members of this group you'll be able to see which servers have the certificate authority installed. If the Cert Publishers group is not in AD, there hasn't been a CA installed yet 🙂
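The same check from the command line, as a sketch:

    # List the members of the built-in Cert Publishers group
    dsquery group -name "Cert Publishers" | dsget group -members -expand

The enterprise CA servers in the domain should show up as members here.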

Once connected to the server that has the CA role, start the Certification Authority console, which can be found in the Administrative Tools menu. In the console go to the Certificate Templates folder, right-click it and choose Manage, and the Certificate Templates Console will be launched. In this console you can see all of the certificate templates that already exist in your environment.

Note: There is a difference depending on whether the CA role is installed on the Standard or Enterprise edition of the Windows OS. The Enterprise edition has many more certificate templates out of the box.

In the middle pane of the Certificate Templates console you need to look for the certificate template called Computer. Once found, right-click it and select Duplicate Template. If your CA is installed on Windows Server 2008 you'll get this window:

[Screenshot: the Duplicate Template dialog asking whether to target Windows Server 2003 Enterprise or Windows Server 2008 Enterprise]

Select Windows Server 2003 Enterprise. If you select Windows Server 2008 Enterprise you could bump into the issue that you won't be able to see the template using web enrollment, and you won't be able to use the certificate template for pre-Vista OS's. So stick to the default setting in this case and click OK. If the CA is Windows Server 2003 or earlier, you won't get this window.
The Properties window for this new certificate template will be opened. The first thing you'll need to do is give the template a name that is meaningful for future usage, something like OpsMgr Certificate Template. Because you need to export the certificate later on, you need to select Allow private key to be exported, which can be found under the Request Handling tab. In the Subject Name tab you need to change the setting to Supply in the request.
To be able to enroll the certificate via web enrollment, go to the Security tab and change the settings for Authenticated Users by checking the boxes for Enroll and Autoenroll. Of course, this depends on the security that is required for your organization.
The next tab to check is the Extensions tab. If you used the Computer certificate template as indicated in the beginning of this blog it should be OK, but to make sure that your certificates will be usable by the OpsMgr agents later on, check that the Application Policies include Client Authentication and Server Authentication, like shown in the picture below. If they're not in there, the certificates that you'll create from this template cannot be used for Operations Manager. Well, they can be used, but they won't work… :p

[Screenshot: the Extensions tab with the Application Policies extension showing Client Authentication and Server Authentication]

If these settings have been selected, click OK and the certificate template is created. You can open the certificate template settings again to verify them before publishing the template. Before you can use the certificate in web enrollment you need to publish it: close the Certificate Templates Console and go back to the Certification Authority console, right-click Certificate Templates, go to New and select Certificate Template to Issue. A new window will open where you can select the certificate template that you have just created; click OK to confirm and that's it. You should now be able to request OpsMgr certificates via web enrollment. If the certificate template is not yet available, you might want to force a GPO update or give it some more time; the certificate template is probably not replicated yet. I've seen this take some time even when the CA role is installed on the same box as the DC.
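Two quick checks that can save you some waiting, sketched out (the template name is the example one from above):

    # Refresh group policy on the box you're requesting from
    gpupdate /force

    # List the templates the CA currently knows about; the new one should appear
    certutil -template | findstr /i "OpsMgr"

If the template doesn't show up in the certutil output yet, AD replication simply hasn't caught up.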

All that is left to do now is connect to the CA web enrollment site from your RMS and request your certificate, move it from the user's personal store to the computer's personal store, then export the certificate (including the private key) and import it using MOMCertImport.exe, which can be found on the installation media, and bounce the Health Service.
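The last step, as a sketch; both ways of pointing MOMCertImport at the certificate are shown, and the names are examples:

    # Option 1: import from an exported PFX file
    MOMCertImport.exe C:\Temp\agent01.pfx

    # Option 2: pick the certificate already in the computer store by subject
    MOMCertImport.exe /SubjectName agent01.contoso.com

    # Bounce the Health Service so the agent picks up the certificate
    net stop HealthService
    net start HealthService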
You'll need to do these last steps on all of the servers that you'll be monitoring in the DMZ as well.

In the "old days", when you wanted to change the properties of a network connection you simply right-clicked the Network icon, whether in Explorer, the Start menu or the taskbar, chose Properties, and you could access any type of network connection known to your system.
Since Windows Vista, Microsoft added the Network and Sharing Center to the Control Panel. As a result, when you now right-click any of the network icons mentioned above you go to… yep, the Network and Sharing Center. If you want to manage your network connections in Vista and Server 2008, you have to click Manage network connections. To be honest, I never liked the extra step to get to my network connections window. I was hoping they would change this again in Windows Seven and Server 2008 R2, and they did! Now you have to click Change adapter settings in the Network and Sharing Center, which opens a new window that contains all of your network connections. Not what I had in mind… 🙂

Like most admins, I like to use shortcuts to open the window I need without browsing through the Start menu, you know, like eventvwr, dsa.msc, compmgmt.msc, cmd… etc. Well, here is the tip I use to open my network connections window without going via the Network and Sharing Center.
Open the Run window by pressing the Windows key + R, then type in ncpa.cpl and hit Enter, et voilà.
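In the same spirit, a few other Control Panel shortcuts I find handy (all standard .cpl applets, so they work from Run, cmd or PowerShell):

    ncpa.cpl      # Network Connections
    appwiz.cpl    # Programs and Features
    sysdm.cpl     # System Properties
    firewall.cpl  # Windows Firewall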

Who says Windows admins don’t use the command line :p