Wednesday 25 April 2012

Short note on Tunneling

Tunneling is a technique for transferring data securely between two networks. The data being transferred is fragmented into smaller packets or frames and then passed through the tunnel. Unlike a normal transfer between nodes, every frame passing through the tunnel receives an additional layer of tunneling encryption and encapsulation, which is also used to route the packets toward their destination. At the far end of the tunnel the encapsulation is reversed and the data is decrypted, then forwarded to the destination node.
A tunnel is a logical path between source and destination endpoints in two networks. Every packet is encapsulated at the source and decapsulated at the destination, and this continues for as long as the logical tunnel persists between the two endpoints.
Tunneling is also known as the encapsulation and transmission of VPN data, or packets. IPSec tunnel mode enables IP payloads to be encrypted and encapsulated in an IP header so that it can be sent over the corporate IP internetwork or Internet.
IPSec protects, secures, and authenticates data between IPSec peer devices by providing per-packet data authentication. IPSec peers can be pairs of hosts or pairs of security gateways, and the data that flows between them is kept confidential. In tunnel mode the original IP datagram is left intact: its header is copied to the front of the packet to become a new outer IP header, and the IPSec header is inserted between the two. The original IP datagram can then be authenticated and encrypted, which also hides the original source and destination addresses.
The tunnel is the logical path or connection through which encapsulated packets travel across the transit internetwork. The tunneling protocol encrypts the original frame so that its contents cannot be interpreted; this encapsulation of VPN data traffic is known as tunneling. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite provides the underlying transport mechanism for VPN connectivity.
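As a minimal sketch of this encapsulation idea (the outer header layout here is made up for illustration, not the format of any real tunneling protocol):

```python
import struct

# Illustrative sketch only: a made-up outer header (source, destination,
# payload length), not the wire format of any real tunneling protocol.
OUTER = struct.Struct("!4s4sH")   # two 4-byte addresses plus a 16-bit length

def encapsulate(outer_src: bytes, outer_dst: bytes, inner_packet: bytes) -> bytes:
    """Wrap the original packet in an outer header at the tunnel entrance."""
    return OUTER.pack(outer_src, outer_dst, len(inner_packet)) + inner_packet

def decapsulate(frame: bytes) -> bytes:
    """Strip the outer header at the tunnel exit, recovering the original packet."""
    _, _, length = OUTER.unpack_from(frame)
    return frame[OUTER.size:OUTER.size + length]

inner = b"original IP datagram"
frame = encapsulate(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", inner)
assert decapsulate(frame) == inner      # round trip restores the payload
```

The outer addresses route the frame across the transit internetwork; only the decapsulated inner packet is handed to the destination node.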
The two different types of tunneling are:
  • Voluntary tunneling: With voluntary tunneling, the client initiates the connection with the VPN server. Voluntary tunneling requires an existing connection between the server and client; the VPN client uses this connection to create a tunneled connection with the VPN server.
  • Compulsory tunneling: With Compulsory tunneling, a connection is created between:
    • Two VPN servers
    • Two VPN access devices – VPN routers
    In this case, the client dials in to the remote access server using one of the following methods:
    • Through the local LAN.
    • Through an Internet connection.
    The remote access server then creates a tunnel to a VPN server, thereby compelling the client to use a VPN tunnel to reach the remote resources.
VPN tunnels can be created at the following layers of the Open Systems Interconnection (OSI) reference model:
  • Data-Link Layer – layer 2: VPN protocols that operate at this layer are Point-to-Point Tunneling Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP).
  • Network Layer – layer 3: IPSec can operate as a VPN protocol at the Network layer of the OSI reference model.

VPN Overview

Virtual Private Networks (VPNs) provide secure connections through a non-secure network by ensuring data privacy: private data remains secure in a public environment. Remote access VPNs provide a common environment through which many different parties, such as intermediaries, clients, and off-site employees, can access information via web browsers or email. Many companies supply their own VPN connections via the Internet. Through their ISPs, remote users running VPN client software are assured private access in a publicly shared environment. Using analog, ISDN, DSL, cable, dial-up, and mobile IP technologies, VPNs are implemented over extensive shared infrastructures. Email, database, and office applications use these secure remote VPN connections.
A few of the main components needed to create VPN connections are listed below:
  • VPN services need to be enabled on the server.
  • VPN client software has to be installed on the VPN client. A VPN client uses the Internet, tunneling, and TCP/IP protocols to establish a connection to the network.
  • The server and client have to be on the same network.
  • A Public Key Infrastructure (PKI)
  • The server and client have to use the same:
    • Tunneling protocols
    • Authentication methods
    • Encryption methods
  • Centralized accounting
Remote access VPNs offer a number of advantages, including:
  • Third parties oversee dial-up access to the network.
  • New users can be added at little additional cost and with no extra expense to the infrastructure.
  • WAN circuit and modem costs are eliminated.
  • Remote access VPNs dial local ISP numbers, so a VPN can be established from anywhere via the Internet.
  • Cable modems enable fast connectivity and are relatively cost efficient.
  • Information is easily and speedily accessible to off-site users in public places via Internet availability and connectivity.

Tunneling Protocols Overview

The tunneling protocols are responsible for the following functions:
  • Tunnel maintenance: This involves both the creation and management of the tunnel.
  • VPN data transfer: This relates to the actual sending of encapsulated VPN data through the tunnel.
The tunneling protocols are:
  • Point-to-Point Tunneling Protocol (PPTP)
  • Layer 2 Tunneling Protocol (L2TP)

How Tunneling Works

There are two types of VPN tunnel, PPTP (Point-to-Point Tunneling Protocol) and L2TP (Layer 2 Tunneling Protocol). Both PPTP and L2TP tunnels are essentially logical sessions between two endpoints. Before the endpoints can communicate, the tunneling type, PPTP or L2TP, must be negotiated between them, and further parameters such as encryption, address assignment, and compression must be configured to obtain the best possible security over the Internet-based logical tunnel. The tunnel is created, maintained, and terminated with a tunnel management protocol.
Once the tunnel is in place, data can be sent, and clients or the server can use the same tunnel to send and receive data across the internetwork. How the data is transferred depends on the tunneling protocol in use. For example, whenever the client wants to send a payload (the packets containing data) to the tunneling server, the tunnel client first adds a header to each packet. This header contains the routing information that directs the packet to its destination across the internetwork. Once the payload is received at the destination, the header information is verified and the destination tunnel server forwards the packet to the destined node, client, or server.

Point-to-Point Protocol (PPP)

Since PPTP and L2TP both depend entirely on a PPP connection, it is important to examine PPP a little more closely. PPP was originally designed to work with dial-up or dedicated connections. When data is transferred over a PPP connection, the packets are encapsulated within PPP frames and then transmitted to the destination dial-up or PPP server.
There are four distinct phases of negotiation in a PPP connection. Each of these four phases must complete successfully before the PPP connection is ready to transfer user data.
  • Phase 1: PPP Link Establishment. PPP uses the Link Control Protocol (LCP) to establish the connection to the destination network; LCP is also responsible for maintaining and terminating the connection. During this phase, LCP connects to the destination and selects the authentication protocol that will be used in Phase 2. The two nodes also negotiate whether they agree on a compression or encryption algorithm; if so, it is implemented in Phase 4.
  • Phase 2: User Authentication. The user credentials are sent to the remote destination for authentication. Several authentication protocols of varying strength exist, and a secure one should be used to safeguard the credentials. If PAP (Password Authentication Protocol) is used, the user information is passed in clear text and can easily be captured. If an intruder captures the credentials, then once the user's connection is authenticated the intruder can trap the communication, disconnect the original user, and take control of the connection.
  • Phase 3: PPP Callback Control The Microsoft implementation of PPP includes an optional callback control phase. This phase uses the Callback Control Protocol (CBCP) immediately after the authentication phase. If configured for callback, both the remote client and NAS disconnect after authentication. The NAS then calls the remote client back at a specified phone number. This provides an additional level of security to dial-up connections. The NAS allows connections from remote clients physically residing at specific phone numbers only. Callback is only used for dial-up connections, not for VPN connections.
  • Phase 4: Invoking Network Layer Protocol(s) Once the previous phases have been completed, PPP invokes the various network control protocols (NCPs) that were selected during the link establishment phase (Phase 1) to configure protocols that the remote client uses. For example, during this phase, IPCP is used to assign a dynamic address to the PPP client. In the Microsoft implementation of PPP, the Compression Control Protocol (CCP) is used to negotiate both data compression (using MPPC) and data encryption (using MPPE).
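The four phases above can be sketched as a toy state machine. The phase names follow the text; the pass/fail logic is illustrative only:

```python
# Toy model of the four PPP negotiation phases described above.
PPP_PHASES = [
    "link-establishment",   # Phase 1: LCP negotiates the link
    "authentication",       # Phase 2: e.g. PAP, CHAP, MS-CHAP, EAP
    "callback-control",     # Phase 3: optional CBCP callback
    "network-layer",        # Phase 4: NCPs such as IPCP and CCP
]

def negotiate(results):
    """Walk the phases in order; any failure aborts before data transfer."""
    for phase, ok in zip(PPP_PHASES, results):
        if not ok:
            return f"failed at {phase}"
    return "connection ready"

print(negotiate([True, True, True, True]))   # connection ready
print(negotiate([True, False, True, True]))  # failed at authentication
```

Only after every phase completes does PPP begin forwarding user data, which is the point made in the next section.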

Data Transfer

Once the four phases of PPP negotiation have been completed, PPP begins to forward data to and from the two peers. Each transmitted data packet is wrapped in a PPP header that the receiving system removes. If data compression was selected in phase 1 and negotiated in phase 4, data is compressed before transmission. If data encryption is selected and negotiated, data is encrypted before transmission. If both encryption and compression are negotiated, the data is compressed first then encrypted.
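The compress-then-encrypt ordering can be illustrated with a toy sketch: zlib stands in for MPPC compression, and a simple XOR stream stands in for MPPE (it is not real encryption, just a placeholder to show the ordering):

```python
import zlib

KEY = b"example-session-key"  # stand-in for a negotiated session key

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for MPPE; NOT real encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def send(payload: bytes) -> bytes:
    # Order matters: compress first (ciphertext does not compress well),
    # then encrypt, as described above.
    return xor_stream(zlib.compress(payload), KEY)

def receive(frame: bytes) -> bytes:
    # Reverse order on the receiving side: decrypt, then decompress.
    return zlib.decompress(xor_stream(frame, KEY))

message = b"PPP payload " * 50
assert receive(send(message)) == message
assert len(send(message)) < len(message)  # compression paid off
```

Swapping the order would mean compressing high-entropy ciphertext, which yields almost no size reduction; that is why compression is applied before encryption.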

Point-to-Point Tunneling Protocol (PPTP)

PPTP encapsulates PPP frames in IP datagram for transmission over an IP internetwork such as the Internet. PPTP can be used for remote access and router-to-router VPN connections.
PPTP (Point-to-Point Tunneling Protocol) uses a TCP connection for tunnel management and GRE (Generic Routing Encapsulation) to encapsulate the PPP frames that carry data through the tunnel. Whether compression or encryption is applied depends on the tunnel configuration.
Point-to-Point Tunneling Protocol (PPTP), an extension of Point-to-Point Protocol (PPP), encapsulates PPP frames into IP datagrams to transmit data over an IP internetwork. To create and manage the tunnel, PPTP utilizes a TCP connection. A modified version of Generic Route Encapsulation (GRE) deals with data transfer by encapsulating PPP frames for tunneled data. The encapsulated tunnel data can be encrypted and/or compressed. However, PPTP encryption can only be utilized when the authentication protocol is EAP-TLS or MS-CHAP. This is due to PPTP using MPPE to encrypt VPN data in a PPTP VPN, and MPPE needing EAP-TLS or MS-CHAP generated encryption keys.
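Assuming the enhanced-GRE layout from RFC 2637 (simplified here to the common fields: flags/version, protocol type, payload length, call ID, and sequence number), building the data-channel header might look like:

```python
import struct

# Simplified sketch of the enhanced GRE header PPTP uses (RFC 2637).
# The real header has more optional fields; only the common case is shown.
GRE_PROTO_PPP = 0x880B  # protocol type identifying a PPP payload
FLAGS_VER = 0x3001      # key and sequence-number bits set, GRE version 1

def gre_wrap(ppp_frame: bytes, call_id: int, seq: int) -> bytes:
    """Prefix a PPP frame with a simplified enhanced-GRE header."""
    header = struct.pack("!HHHHI", FLAGS_VER, GRE_PROTO_PPP,
                         len(ppp_frame), call_id, seq)
    return header + ppp_frame

frame = gre_wrap(b"\xffPPP frame bytes", call_id=7, seq=1)
assert len(frame) == 12 + 16   # 12-byte header plus the 16-byte payload
```

The call ID lets a single PPTP server demultiplex GRE frames belonging to different tunnels, which is why it appears in every data packet.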
The authentication methods supported by PPTP are the same authentication mechanisms supported by PPP:
  • PAP
  • CHAP
  • MS-CHAP
  • EAP

Layer Two Tunneling Protocol (L2TP)

Layer 2 Tunneling Protocol (L2TP) combines the benefits and features of PPTP and Cisco’s Layer 2 Forwarding (L2F) protocol. L2TP encapsulates PPP frames and sends the encapsulated data over IP, Frame Relay, ATM, and X.25 networks. With L2TP, the PPP and Layer 2 endpoints can exist on different devices. L2TP can also operate as a tunneling protocol over the Internet. L2TP uses UDP packets and a number of L2TP messages for tunnel maintenance, and UDP is also used to send L2TP-encapsulated PPP frames as tunneled data.
While L2TP can provide compression for encapsulated PPP frames, Microsoft’s implementation of L2TP relies on the IPSec security protocol for encryption. When L2TP is used with IPSec, the highest level of security is assured, including data confidentiality and integrity, data authentication, and replay protection. IPSec secures the actual packets of data rather than the connection used to convey them, and therefore provides security on insecure networks such as the Internet. IPSec uses encryption, digital signatures, and hashing algorithms to secure data.
IPSec provides the following security features:
  • Authentication; digital signatures are used to authenticate the sender.
  • Data integrity; hash algorithms ensure that data has not been tampered with while in transit.
  • Data privacy; encryption ensures that data cannot be interpreted while in transit.
  • Replay protection; protects data by preventing attackers who capture and resend packets from gaining unauthorized access.
  • The Diffie-Hellman key agreement algorithm is used to generate keys. This makes it possible for confidential key agreement to occur.
  • Non-repudiation; public key digital signatures authenticate the origin of the message.
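The Diffie-Hellman key agreement mentioned above can be demonstrated with textbook-sized toy parameters (real IPSec groups use primes of 2048 bits or more):

```python
# Toy Diffie-Hellman exchange with textbook parameters; real IPSec key
# agreement uses standardized groups with much larger primes.
p, g = 23, 5          # public modulus and generator (toy values)
a, b = 6, 15          # private values chosen by each peer

A = pow(g, a, p)      # first peer sends A over the insecure channel
B = pow(g, b, p)      # second peer sends B over the insecure channel

# Each side combines its own secret with the other's public value.
shared_1 = pow(B, a, p)
shared_2 = pow(A, b, p)
assert shared_1 == shared_2   # both derive the same key material
print(shared_1)  # 2
```

An eavesdropper sees only p, g, A, and B; recovering the shared value from those requires solving a discrete logarithm, which is what makes confidential key agreement over a public channel possible.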
The two IPSec protocols are:
  • Authentication Header (AH); provides data authentication, data integrity and replay protection for data.
  • Encapsulating Security Payload (ESP); provides data authentication, data confidentiality and integrity, and replay protection.
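The per-packet authentication and integrity service that AH and ESP provide can be illustrated, in spirit, with an HMAC tag. This is a sketch of the idea, not the actual IPSec wire format, and the key is a stand-in for one negotiated in a security association:

```python
import hashlib
import hmac

KEY = b"negotiated-sa-key"  # stand-in for a key from an IPSec SA

def protect(packet: bytes) -> bytes:
    """Append an HMAC-SHA256 tag, an AH-style per-packet integrity check."""
    return packet + hmac.new(KEY, packet, hashlib.sha256).digest()

def verify(frame: bytes) -> bytes:
    """Recompute the tag; reject the packet if it was modified in transit."""
    packet, tag = frame[:-32], frame[-32:]
    expected = hmac.new(KEY, packet, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: packet was tampered with")
    return packet

frame = protect(b"ip datagram")
assert verify(frame) == b"ip datagram"
```

Flipping even one bit of the protected packet makes verification fail, which is how per-packet data authentication detects tampering in transit.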

Port 1723

Port 1723 is a network port that uses both TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) in order to transfer data from an application on one machine to an application on another machine. Port 1723 is rarely manually used and background services within the applications that access it usually manage it. This type of access between multiple applications is known as the PPTP (Point-to-Point Tunneling Protocol), which, by default, is usually run on Port 1723.
TCP vs. UDP
While Port 1723 carries both TCP and UDP traffic, each protocol has a different purpose. TCP is used to transfer actual data and commands from one application to another and guarantees that the data will be delivered and received in the same order in which it was sent. UDP, however, is used to transfer raw information in the form of datagrams and does not guarantee that the information will be delivered or received in the proper order. Instead, UDP leaves it to the receiving application to manage any information sent over Port 1723 or any other port.
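A minimal loopback sketch shows UDP's datagram model, where each sendto() produces one self-contained message:

```python
import socket

# Minimal loopback demonstration of UDP's datagram model: each sendto()
# produces one self-contained datagram, and the transport layer makes no
# delivery or ordering guarantees (loopback happens to be reliable).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))      # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram-1", addr)
sender.sendto(b"datagram-2", addr)

# Each recvfrom() returns exactly one whole datagram, never a fragment.
first, _ = receiver.recvfrom(1024)
second, _ = receiver.recvfrom(1024)
assert {first, second} == {b"datagram-1", b"datagram-2"}

sender.close()
receiver.close()
```

A TCP socket, by contrast, presents a single ordered byte stream, which is why applications that need guaranteed in-order delivery use TCP on this port.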
Applications
Port 1723 is mostly used for the PPTP and PPTP VPN (Virtual Private Networking) protocols. These protocols exchange information between multiple devices and applications. For example, if a user wishes to access a server at a separate location from his/her personal computer and a firewall protects that server, the user will need to establish a PPTP VPN between the two devices. Together, TCP and the PPTP VPN provide a secure connection between the two devices and guarantee data transfer. PPTP VPNs may also be used for gaming, allowing a user to connect to a private server outside his/her own network.
Advantages
Port 1723 is advantageous because it allows users to communicate via applications on multiple computers and networks. When used in conjunction with TCP, Port 1723 guarantees that data is sent and received correctly, while PPTP VPNs manage the security algorithms within the port to protect the user’s data from hackers and other cyber thieves.

Public DNS Servers

DNS (Domain Name System) servers are designed to allow networked devices such as computers, phones, and other servers to look up address records in DNS tables. The majority of DNS servers are configured to provide service to the organizations or people that own or pay service fees for the hardware. There are a number of public DNS servers that will provide DNS resolutions for requesting computers or people. The majority of these servers are purposely public; however, some become public due to misconfiguration or malicious behavior. These typically get fixed once management realizes that they have been providing free service to others.

How Does DNS Work?

The Domain Name System (DNS) is a database that translates a fully qualified domain name into an Internet Protocol (IP) address. Most computer networks have at least one DNS server, commonly referred to as the “name server,” to handle queries. It stores a listing of all of the IP addresses on the network as well as a cache of IP addresses recently accessed outside of the network. On any given network, a computer only needs to know the location of one name server. When a computer looks up an IP address that is not stored locally, it checks with the name server. The name server determines whether the address is local; if someone on the network has recently requested the same address, the IP address is retrieved from the server’s cache.
Each of these cases results in little wait for a response. If the address has not been requested recently, then the Name Server will perform a search by querying two or more name servers. These queries can take anywhere from seconds to a minute based on the network speed. If no resolution is found, an error message is returned to the user.
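The cache-first behaviour described above can be sketched with a toy resolver. The upstream table here is a hypothetical stand-in for real recursive queries to other name servers:

```python
# Toy caching resolver mirroring the name-server behaviour described above:
# answer from the local cache when possible, otherwise query upstream.
class CachingResolver:
    def __init__(self, upstream):
        self.upstream = upstream   # callable: hostname -> IP address string
        self.cache = {}
        self.upstream_queries = 0

    def resolve(self, hostname):
        if hostname in self.cache:          # recently requested: no wait
            return self.cache[hostname]
        self.upstream_queries += 1          # slow path: ask other servers
        ip = self.upstream(hostname)
        self.cache[hostname] = ip
        return ip

# Hypothetical upstream table standing in for real recursive queries.
resolver = CachingResolver({"www.example.com": "93.184.216.34"}.get)
resolver.resolve("www.example.com")
resolver.resolve("www.example.com")         # second call served from cache
assert resolver.upstream_queries == 1
```

Only the first lookup pays the cost of querying other name servers; repeat requests are answered immediately from the cache, which is why recently visited sites resolve with little wait.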

Public DNS Servers

The following are public DNS servers available for free use at the time of this writing. Before changing your personal or work computer DNS settings, ensure that you note the specifics for the legacy system you are changing in the event the free service has issues or is no longer available.
Google Public DNS
8.8.8.8
8.8.4.4
Level 3 Communications (Broomfield, CO, US)
4.2.2.1
4.2.2.2
4.2.2.3
4.2.2.4
4.2.2.5
4.2.2.6
Verizon (Reston, VA, US)
151.197.0.38
151.197.0.39
151.202.0.84
151.202.0.85
151.203.0.84
151.203.0.85
199.45.32.37
199.45.32.38
199.45.32.40
199.45.32.43
GTE (Irving, TX, US)
192.76.85.133
206.124.64.1
One Connect IP (Albuquerque, NM, US)
67.138.54.100
OpenDNS (San Francisco, CA, US)
208.67.222.222
208.67.220.220
Exetel (Sydney, AU)
220.233.167.31
VRx Network Services (New York, NY, US)
199.166.31.3
SpeakEasy (Seattle, WA, US)
66.93.87.2
216.231.41.2
216.254.95.2
64.81.45.2
64.81.111.2
64.81.127.2
64.81.79.2
64.81.159.2
66.92.64.2
66.92.224.2
66.92.159.2
216.27.175.2
Sprintlink (Overland Park, KS, US)
199.2.252.10
204.97.212.10
204.117.214.10
Cisco (San Jose, CA, US)
64.102.255.44
128.107.241.185
OpenNIC
202.83.95.227 (au)
119.31.230.42 (au)
178.63.26.173 (de)
217.79.186.148 (de)
27.110.120.30 (nz)
89.16.173.11 (uk)
69.164.208.50 (us)
216.87.84.211 (us)
2001:470:8388:10:0:100:53:20 (us)
2001:470:1f10:c6::2 (us)
ClearCloud
Preferred DNS server: 74.118.212.1
Alternate DNS server: 74.118.212.2
A full list of available OpenNIC servers is published on the OpenNIC website.

How to Change DNS Server Settings on Microsoft Windows

The DNS settings on a computer running the Microsoft Windows operating system (OS) are configured in the TCP/IP properties window for the computer. The following example to change DNS server settings is based on the steps required to change the settings on Microsoft Windows 7 OS. They may differ slightly based on the specific version of Windows installed on the computer.
Step 1 – Select the “Start” menu button and click the “Control Panel” icon.
Step 2 – Select the “Network and Internet,” “Network and Sharing Center,” and “Change Adapter Settings” menu options.
Step 3 – Choose the network connection that should use the public DNS server. For an Ethernet connection, right-click “Local Area Connection” and choose the “Properties” menu option. For a wireless connection, right-click “Wireless Network Connection” and choose the “Properties” menu option. Then, enter a password if prompted or confirm that you want to modify the settings.
Step 4 – Choose the “Networking” tab. Then select the “Internet Protocol Version 4 (TCP/IPv4)” or “Internet Protocol Version 6 (TCP/IPv6)” menu option and click the “Properties” button.
Step 5 – Select the “Advanced” menu option and then click the “DNS” tab. Note any DNS server IP addresses listed on this screen for future reference and clear them from the window. Click the “OK” button.
Step 6 – Choose the “Use the Following DNS Server Addresses” option. If you see any addresses listed here, write them down. Then, enter the public DNS server addresses in the appropriate fields. If you intend to use the Google Public DNS servers, your entries would be:
IPv4: 8.8.8.8 and/or 8.8.4.4.
IPv6: 2001:4860:4860::8888 and/or 2001:4860:4860::8844
Step 7 – Restart the network connection selected earlier for configuration. Then, repeat the steps for any additional network connections that require reconfiguration.

How to Change DNS Server Settings on Mac OS X

On Mac OS X, the DNS server settings are set and changed in the “Network” window of the operating system. This example uses the specific instructions for Mac OS X 10.5 and may vary slightly based on the version of the OS installed on your computer.
Step 1 – Select “System Preferences” followed by the “Network” menu options from the “Apple” menu.
Step 2 – If there is a lock icon located in the lower corner of the window preventing step 1 from being completed, then click the icon and enter the administrator password for the computer.
Step 3 – Choose the network connection that should use a public DNS server. For an Ethernet connection, choose the “Built-in Ethernet” menu option followed by the “Advanced” menu choice. For a wireless connection, choose “Airport” followed by clicking “Advanced.”
Step 4 – Choose the “DNS” tab. Then, select the “+” symbol to add the new addresses, replacing any that are listed. Ensure legacy DNS servers are written down or otherwise recorded in case you need to use them in the future. To change the DNS server settings to the Google servers, enter the following addresses:
For IPv4: 8.8.8.8 and/or 8.8.4.4.
For IPv6: 2001:4860:4860::8888 and/or 2001:4860:4860::8844
Step 5 – Choose the “Apply” and “Ok” menu buttons to finish configuring the DNS server setup for your Mac computer. Repeat the instructions for an additional network connection on the same computer.

How to Change DNS Server Settings on Linux

In the majority of Linux distributions, the DNS settings are configured or set by using the Network Manager. This example uses settings required on Ubuntu and the steps may be different based on the Linux build or version that you have installed on your computer.
Step 1 – From the computer’s “System” menu, choose the “Preferences” and “Network Connections” menu options.
Step 2 – Choose the network connection that you want to change to use a public DNS server. For an Ethernet connection, select the “Wired” tab and then choose the network interface from the resulting list; it is normally called “eth0.” For a wireless connection, choose the “Wireless” tab and select the appropriate wireless network.
Step 3 – Choose the “Edit” menu button and then choose the “IPv4” or “IPv6” settings menu tab. If you see that the current method being used is “Automatic (DHCP),” then open the dropdown and choose the “Automatic (DHCP) addresses only” menu option. If set to another option, do not change the selection.
Step 4 – In the DNS servers field, enter the desired public DNS server IP addresses separated by a space. To configure the computer to use the Google public DNS servers enter:
IPv4: 8.8.8.8 and/or 8.8.4.4.
IPv6: 2001:4860:4860::8888 and/or 2001:4860:4860::8844
Step 5 – Choose the “Apply” button to save the changes. Some builds of Linux will then ask you to enter a password to confirm the changes. Repeat the same procedure for any additional connections that you want to change.

How to Change DNS Server Settings on Mobile Devices

On a mobile device, the DNS server configuration will normally be saved under the advanced wireless or WiFi settings. The following procedure is generic in nature and will likely require slightly different steps based on the brand of the device being changed.
Step 1 – Open the WiFi settings screen or menu. Locate the menu option or screen where DNS settings are listed.
Step 2 – Note any IP addresses listed for the primary and secondary DNS servers in the event you need to change the settings in the future to the original ones.
Step 3 – Change the DNS server addresses with the desired public servers. To change to use the public Google DNS servers, enter the following addresses:
IPv4: 8.8.8.8 and/or 8.8.4.4.
IPv6: 2001:4860:4860::8888 and/or 2001:4860:4860::8844
Step 4 – Choose the “Save” and “Exit” menu options to complete changing the DNS server settings on your mobile device.

How Do You Test Public DNS Server Changes?

Step 1 – Launch the web browser on the computer or mobile device that has public DNS servers entered.
Step 2 – Enter a well-known website such as www.tech-faq.com or www.google.com.
Step 3 – If the page loads properly, bookmark it in your browser.
Step 4 – Access the page from the bookmark. If the well-known site loads from each test, then the changes to the public DNS server have worked appropriately.
Step 5 – If the webpage fails to load from either test, enter a fixed IP address. A well-known one is http://18.62.0.96/, which should resolve to MIT. If this works, bookmark the page and try again. If the IP address entry fails, you likely entered the DNS changes incorrectly and need to try again.
Step 6 – If neither of the IP address tests works, then enter the old DNS servers and run the tests again. If they fail, then there is a problem with the computer’s network connection that may require ISP or network administrator assistance. If the computer works normally after reverting to the old DNS settings, then there is likely an issue with the public DNS server that you have tried to use.
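The manual test above can also be scripted. In this sketch the resolver is injectable so the check can run without live network access; the host-to-address table in the example is hypothetical:

```python
import socket

# Scripted version of the resolution test above. The resolver argument is
# injectable so the check can run offline; by default it uses the operating
# system's currently configured DNS servers.
def dns_check(hostnames, resolver=socket.gethostbyname):
    results = {}
    for name in hostnames:
        try:
            results[name] = resolver(name)   # resolved: the DNS change works
        except OSError:
            results[name] = None             # failed: revisit the settings
    return results

# Stand-in resolver with a hypothetical answer, instead of a real lookup:
def fake_resolver(name):
    table = {"www.google.com": "142.250.72.4"}   # hypothetical address
    if name not in table:
        raise OSError("resolution failed")
    return table[name]

print(dns_check(["www.google.com", "no-such-host.invalid"], resolver=fake_resolver))
# {'www.google.com': '142.250.72.4', 'no-such-host.invalid': None}
```

Running the function with its default resolver after changing DNS settings gives a quick pass/fail summary per hostname, mirroring the browser-based tests above.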

How Do You Troubleshoot DNS Server Errors?

In the event you are encountering errors or issues after changing to a public DNS server there are some troubleshooting steps that you can take to verify if the error is with the DNS server. Once you run each of the commands, save the results in a text document so that you can send them to the appropriate help desk or message board that supports the server (if there is one).
Step 1 – Confirm that your computer can establish communications with the public DNS server. On a Windows computer, open the command prompt by selecting the “Start” menu button and entering “CMD” in the search text field.
Step 2 – Enter “tracert -d serveraddress” followed by pressing the Enter key. On a Mac OS X computer, open the terminal and enter “/usr/sbin/traceroute -n -w 2 -q 2 -m 30 serveraddress”. On Linux, enter “sudo traceroute -n -w 2 -q 2 -m 30 serveraddress”.

If you do not see the DNS server IP address as the final hop on the return trace or there are a lot of timeouts, then there may be a network connectivity issue preventing contact with the public DNS server.

Step 3 – Confirm that the public DNS server can resolve the hostname. On Windows, enter the following command at the command prompt:
nslookup -debug hostname DNSserverAddress
Mac OS X and Linux:
dig @DNSserverAddress hostname
If you see a section with an A record listed for the hostname in the output, then the DNS server can resolve the name and you should re-check the DNS settings on your computer. If you do not see an answer for the hostname, proceed to the next step.
Step 4 – Confirm that another public DNS server can resolve the hostname that you have selected. Enter the following commands at the command prompt on Windows. The servers used are from Level 3 (first two) and OpenDNS (last two).
nslookup hostname 4.2.2.1
nslookup hostname 4.2.2.2
nslookup hostname 208.67.222.222
nslookup hostname 208.67.220.220
If you get a successful result, then there is likely an issue with the first public DNS server that you tested. If you do not get a successful result, then there is probably an issue with the servers being tested; try again after waiting a while.
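The fallback lookups above can be wrapped in a small script. Command construction is separated from execution so the logic can be checked without nslookup installed; the run callback is an assumption of this sketch:

```python
import subprocess

# The fallback servers from the text: Level 3 (first two), OpenDNS (last two).
FALLBACK_SERVERS = ["4.2.2.1", "4.2.2.2", "208.67.222.222", "208.67.220.220"]

def lookup_commands(hostname):
    """Build the nslookup command line for each fallback server."""
    return [["nslookup", hostname, server] for server in FALLBACK_SERVERS]

def try_fallbacks(hostname, run=None):
    """Return the first fallback server that resolves the hostname, else None."""
    if run is None:  # default: actually execute nslookup
        run = lambda cmd: subprocess.run(cmd, capture_output=True).returncode == 0
    for cmd in lookup_commands(hostname):
        if run(cmd):
            return cmd[2]   # the server address in the command line
    return None

# Exercising the logic with a stand-in runner instead of a live nslookup:
assert try_fallbacks("example.com", run=lambda cmd: cmd[2].startswith("208.")) == "208.67.222.222"
```

Calling try_fallbacks with no run argument performs the real lookups in order and reports the first server that answers, matching the manual procedure.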
Step 5 – Change your computer’s DNS settings to the original servers that were being used if you have no success in changing the settings to a public DNS server. Based on your operating system, you may need to manually enter these addresses again and restart the computer or device.

How to Delete a Virtual Machine Created with Windows Virtual PC

1. If you have the virtual machine that you want to delete running, then you must close or shut down the virtual machine first.

2. Open the Windows 7 Start Menu, click on All Programs, expand the Windows Virtual PC folder, and double click on the Virtual Machines shortcut.
NOTE: You can also open the Virtual Machines folder at C:\Users\(User Name)\Virtual Machines.
3. Right click on the virtual machine that you want to delete, then click on Settings.
4. In the left pane, click on the Hard Disk (ex: Hard Disk 1) that has a .VHD file (ex: Vista SP2.vhd) listed under "Current Value" to select it.
5. In the right pane, click on the Browse button.
NOTE: You could also just navigate to that location for the .vhd file in Windows Explorer.

6. Right click on the .vhd file (ex: Vista SP2.vhd) for the virtual machine that you want to delete, and click on Delete.
7. Click on Yes to approve.
8. Close the window under step 6 and under step 5.

9. Right click on the .vmcx file for the virtual machine that you want to delete, then click on Delete.
10. Click on Yes to approve.
11. Close the window under step 9.

12. Empty the Recycle Bin.

How to audit file and folder access to improve Windows 2000 Pro security

While auditing file and folder access on a client's home computer or a networked office machine is probably overkill, I recommend auditing any publicly accessible computer, whether it’s networked or not. Auditing file and folder access allows you to test your security policy and determine whether any users are trying to use the machine in an unauthorized manner.

Enabling auditing
Before you can audit file and folder access, you must enable the Audit Object Access setting in the machine’s group policy. Log on to the machine with a local administrative account and open the Control Panel. Double-click the Administrative Tools icon and then the Local Security Policy icon. Doing so will display the machine’s group policy settings.

Navigate through the console tree to Security Settings | Local Policies | Audit Policy. When you select the Audit Policy container, the column to the right will display a number of different events that you can audit, as shown in Figure A.

Figure A
You can audit a number of events.


As you can imagine, it’s easy to get carried away with the idea of an ultrasecure machine by auditing absolutely everything. But this is a bad idea for several reasons. First, the audit process builds log files. Each entry in the log consumes a small amount of hard disk space. If too many audited events occur, your machine could run out of hard disk space. Second, each audit also consumes a small amount of CPU time and memory. So excessive auditing can negatively affect system performance.

Perhaps the best reason for not auditing everything is information overload. I have seen situations in which several hundred events are audited every minute. This makes it virtually impossible to locate anything useful within the logs because the useful log entries blend in with the garbage entries. My advice is to use discretion when creating an audit policy. Don’t audit anything that you don’t absolutely need to know about. The more you refine which events are audited, the more meaningful each audited event will become.

Let’s take a look at some of the available auditing options. Obviously, which audits are appropriate for your needs will vary depending on your environment. For general purpose auditing, though, I recommend auditing logon events so that you can tell when users have logged on or off. I also recommend auditing object access (i.e., files and folders). Auditing object access will allow you to see who does what to designated files and folders. Finally, I recommend auditing policy changes. This is a big one, because if someone is tampering with the machine’s security policy, you really need to know about it.

To enable these types of auditing, double-click the appropriate option within the Local Security Policy Settings console. You will then see a dialog box similar to the one shown in Figure B. As you can see in this figure, you can implement a failure audit and/or a success audit for each event.

Figure B
You can perform success and/or failure audits for each event.


So how do you know whether to perform a success or a failure audit? Well, that’s really up to you. For logins and policy changes, I recommend auditing both success and failures. For example, a success audit of login actions would create an audit log entry every time someone logged in successfully. A failure audit of the same event would write an audit log entry every time someone entered a password incorrectly. Likewise, a success audit on policy changes would let you know that someone changed a security policy, while a failure audit would tell you that someone tried to change a security policy but didn’t actually manage to make the change happen.

When it comes to auditing object access, I recommend also enabling success and failure audits. Just because success and failure audits are enabled for object access, though, it doesn’t mean that you actually have to use them. Every object that you audit access for has an entire range of audit options. Enabling success and failure audits simply makes these options available to you.

Auditing object access
You must be careful which objects you audit or you will end up with information overload problems. It's very easy to end up with information overload because if you audit a folder, the audit applies to every object within the folder and within any subfolders. The audit applies to child objects, grandchild objects, and so on. So when possible, I recommend auditing objects at the file level. For example, if you needed to know who made the most recent changes to an Excel spreadsheet, it would be better to audit the actual XLS file than the folder containing it.

I also recommend that you avoid auditing system files and folders. Doing so can also result in information overload. For example, if you were to audit the Windows folder, you would end up with countless audit log entries because the system is constantly accessing files found in this folder. If you really wanted to audit Windows, a better solution might be to audit the registry files.

To audit a file or folder, right-click it and select the Properties command from the resulting menu. You’ll see the object’s Properties sheet. Select the Properties sheet’s Security tab, and click the Advanced button to display the Access Control Settings Properties sheet for the object. Select the Auditing tab. Then, click the Add button, and you’ll be presented with a list of users and groups. Select the users or groups that you wish to audit, and click OK.

For example, years ago, I worked for a large insurance company. At the company, a woman on the administrative staff was deliberately doing things to sabotage the system. Before we confronted her with this information, we needed to build a case against her. So we created audit policies that applied only to her. This way, we could watch every move she made without being flooded with thousands of log entries pertaining to other users.

Once you have selected a user or group, you’ll see the dialog box shown in Figure C. As you can see, you can enable success and/or failure audits for many types of access to the file or folder on a user or group basis.

Figure C
You can audit a number of different access types for files and folders.


Viewing audit results
You might be curious to know how to view the audit results. Open the Control Panel and double-click the Administrative Tools icon and then the Event Viewer icon. When the Event Viewer opens, click the Security container to see the security logs, as shown in Figure D. In the figure, you’ll notice how many log entries were applied in a matter of a few seconds. This is why it’s so important to use discretion when creating an audit policy. If you want to get more information on a particular event, simply double-click it.

Tuesday 10 April 2012

The Shutdown Event Tracker

Computer shutdowns can be sorted into one of the following categories:
  • Expected shutdowns: An expected shutdown can be defined as a computer shutdown which you predict will occur. Expected shutdowns usually occur when one of the following actions is performed:
    • Clicking Start, and then the Shut Down command
    • Pressing Ctrl + Alt + Del, and then clicking Shut Down
  Expected shutdowns can be categorized into:
    • Planned shutdowns: These are shutdowns over which administrators have some form of control.
    • Unplanned shutdowns: These are shutdowns normally initiated by applications.
  • Unexpected shutdowns: Unexpected shutdowns result in the system shutting down without warning.
To enable services, programs, and files to close correctly, you should only turn off the computer when the operating system informs you that it is OK to shut down the server. This is extremely important because it ensures that all configuration settings and other important information are saved and written to disk.
Since administrators need to monitor when and why servers are restarted, Windows Server 2003 includes the following tools to track shutdown events:
  • Shutdown Event Tracker
  • Shutdown.exe
The Shutdown Event Tracker, a new Windows Server 2003 feature, is an uncomplicated GUI application that allows administrators to monitor shutdown events on the server. The tool is enabled on Windows Server 2003 by default. The Shutdown Event Tracker collects information on the reasons why the server was shut down, and then logs this information in Event Viewer. The command-line utility equivalent to the Shutdown Event Tracker is Shutdown.exe.
The Shutdown Event Tracker requires you to provide a reason whenever a server is shut down or restarted. When a server is shut down expectedly, a dialog box or page is displayed, requesting you to specify the reason for the server being shut down. When a server is shut down unexpectedly, the next user to log on to the server has to specify the reason for the server shutting down. Shutdown events can be viewed in Event Viewer, and can be useful when you need to improve uptime.

How to configure the Shutdown Event Tracker

  1. Click Start, Run, and then enter gpedit.msc. Click OK.
  2. The Group Policy Object Editor console opens.
  3. In the left pane, expand Computer Configuration, and then Administrative Templates.
  4. Click System.
  5. In the right pane, find and double-click Display Shutdown Event Tracker.
  6. When the Display Shutdown Event Tracker Properties dialog box opens, select one of the following options:
    • Not Configured
    • Enabled
    • Disabled
  7. If you select the Enabled option, you can choose between the following options to specify when the Shutdown Event Tracker should be displayed:
    • Always: This option is self explanatory.
    • Server Only: When selected, the Shutdown Event Tracker is displayed for only Windows Server 2003 servers.
    • Workstation Only: When selected, the Shutdown Event Tracker is displayed for only Windows XP Professional workstations.
  8. If you want to view help information on the Shutdown Event Tracker application, click the Explain tab.
  9. Click OK, and then close the Group Policy Object Editor console.

How the Shutdown Event Tracker works

  1. Enable the Display Shutdown Event Tracker policy so that the Shutdown Event Tracker is displayed.
  2. The Shut Down Windows dialog box is displayed when the server is shut down or restarted. The Shut Down Windows dialog box requires you to record information as to why the server was shut down.
  3. Using the options in the What do you want the computer to do drop-down list box, choose one of the following tasks:
    • Restart
    • Shut down
    • Log off the current user
  4. Using the Option drop-down list box, select the reason that best describes why the server was shut down or restarted.
  5. Next, either select or clear the Planned checkbox to indicate whether the shutdown was planned or unplanned.
  6. In the Comment box, enter any additional useful information.
  7. Click OK to close the Shut Down Windows dialog box.

How to invoke the Shutdown Event Tracker functionality on a remote computer

  1. To bring up the Remote Shutdown Dialog page on a remote computer, use the shutdown.exe command-line utility with the /i switch.
  2. Select the appropriate option from the What do you want the computer to do drop-down list box.
  3. In the Shutdown Event Tracker group box, select an option which describes why the computer is being shut down, and select or clear the Planned checkbox as appropriate.
  4. Enter a comment in the Comment box.
  5. Click OK.

How to use the shutdown.exe command-line utility

The shutdown.exe command-line utility can be used to enter shutdown events using the command-line. The available options for the shutdown.exe command-line utility are listed below:
  • /s: Shuts down the computer.
  • /r: Restarts the computer instead of shutting it down.
  • /t nnn: Specifies, in seconds, the time-out period before shutdown. The default is 30 seconds; the value can be between 0 and 600.
  • /d [p:]xx:yy: Describes the reason for the shutdown; p specifies that the shutdown is planned, and xx and yy are the major and minor reason numbers. The shutdown is regarded as unplanned when p: is omitted.
  • /p: Turns off the local computer with no time-out or warning; can be combined with the /d switch.
  • /m \\ComputerName: Specifies the name of the target computer.
  • /a: Cancels a pending shutdown.
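For example, the switches above can be combined to shut down a remote server with a logged reason. Server01 and the reason codes below are illustrative placeholders; check shutdown /? for the reason codes defined on your system:

```
:: Planned shutdown of Server01 in 60 seconds, with a reason code and comment
shutdown /s /m \\Server01 /t 60 /d p:4:1 /c "Scheduled maintenance"

:: Planned restart of Server01
shutdown /r /m \\Server01 /d p:4:1

:: Cancel a pending shutdown on Server01
shutdown /a /m \\Server01
```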

How to use the Registry to configure registry entries for the Shutdown Event Tracker

You can use the Registry Editor to configure the Shutdown Event Tracker. Through configuring registry settings, you can enable or disable the Shutdown Event Tracker.
To configure registry settings for the Shutdown Event Tracker,
  1. Click Start, Run, enter regedit, and click OK.
  2. The Registry Editor console opens.
  3. Navigate to HKEY_LOCAL_MACHINE, Software, Microsoft, Windows, CurrentVersion, and then Reliability.
  4. Select ShutdownReasonUI. If ShutdownReasonUI does not exist, create a DWORD value, and then name it ShutdownReasonUI.
  5. Enter a data value of 1 in the Value data box to enable the Shutdown Event Tracker, or a data value of 0 to disable it.
  6. Click OK.
  7. Close the Registry Editor console.
  8. Restart the computer.
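Instead of editing the value by hand in steps 4 and 5, the same change can be made with the reg.exe command-line tool:

```
:: Enable the Shutdown Event Tracker (use /d 0 to disable it)
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Reliability /v ShutdownReasonUI /t REG_DWORD /d 1 /f
```

A restart is still required for the change to take effect.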

How to add custom reasons for the Shutdown Event Tracker

  1. Click Start, Run, enter regedit, and click OK.
  2. The Registry Editor console opens.
  3. Navigate to HKEY_LOCAL_MACHINE, Software, Microsoft, Windows, CurrentVersion, Reliability, and then UserDefined.
  4. Create a new string value using the available flags:
    • P: indicates a planned shutdown.
    • C: a comment is required.
    • B: an ID is required.
    • S: the expected shutdown event dialog box is displayed.
    • D: the unexpected shutdown event dialog box is displayed.
  5. You can add additional comments using the string registry value.
  6. Click OK and then close the Registry Editor console.
  7. Restart the computer.

How to view shutdown events

Shutdown events can be viewed in Event Viewer. Event Viewer is used to monitor events that took place on a computer. Event Viewer stores events that are logged in a system log, application log, and security log. Because the system log contains events that are associated with the operating system, shutdown events are written to the system log.
To open Event Viewer,
  1. Select Start, select Administrative Tools, and then select Event Viewer.
  2. Select the event log you want to view.
Event Viewer logs list five event types:
  • Information events tell you when a particular activity occurs, such as starting the system.
  • Warning events point out problems that could possibly occur.
  • Error events indicate an actual error that occurred.
  • Success Audit events indicate an event that has been audited for success.
  • Failure Audit events indicate an event that has been audited for failure.
To view shutdown events
  1. Open Event Viewer
  2. Open the System log.
  3. Using the Event Source drop-down list, select USER32.
  4. Click OK to view the System log filtered to show only USER32 shutdown events.
  5. When the events are displayed, you can examine a detailed description of a particular shutdown event by double-clicking the particular event.
  6. The Event Properties dialog box is displayed.
  7. Click OK to close the dialog box.
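On Windows Server 2003, a similar filtered view can be produced from the command line with the eventquery.vbs script. The filter syntax below is a sketch; check eventquery.vbs /? on your system:

```
:: List System log events whose source is USER32 (shutdown events)
cscript %SystemRoot%\system32\eventquery.vbs /l system /fi "Source eq USER32"
```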

How to disable the Shutdown Event Tracker

  1. Click Start, Run, and then enter gpedit.msc. Click OK.
  2. The Group Policy Object Editor console opens.
  3. In the left pane, expand Computer Configuration, and then Administrative Templates.
  4. Click System
  5. In the right pane, find and double-click the Display Shutdown Event Tracker.
  6. When the Display Shutdown Event Tracker Properties dialog box opens, click the Disabled option to disable the Shutdown Event Tracker.
  7. Click OK.
  8. Close the Group Policy Editor console.

How to Delegate Administrator Privileges in Active Directory

The primary reason to create organizational units is to distribute administrative tasks across the organization by delegating administrative control to other administrators. Delegation is especially important when a decentralized administrative model is developed. Delegation of administration is the process of decentralizing the responsibility for managing organizational units from a central administrator to other administrators. The ability to establish access to individual organizational units is an important security feature in Active Directory. You can control access down to the lowest level of an organization without having to create many Active Directory domains.
Authority delegated at the site level will likely span domains or, conversely, may not include targets in the domain. Authority delegated at the domain level will affect all objects in the domain. Authority delegated at the organizational unit level can affect that object and all of its child objects, or just the object itself.
Delegation of control is the ability to assign the responsibility of managing Active Directory objects to another user, group, or organization. By delegating control, the need for multiple administrative accounts that have broad authority can be eliminated. Delegated administration in Active Directory helps ease the administrative burden of managing a network by distributing routine administrative tasks to multiple users. Basic delegated rights, such as creating a user account or group account, can be given to normal users, while major domain-wide administration work can be delegated to senior- or junior-level administrators.
Autonomy is the ability of administrators in an organization to independently manage:
  • All or part of service management (called service autonomy).
  • All or part of the data in the Active Directory database or on member computers that are joined to the directory (called data autonomy).

Common Administrative Tasks

Administrators routinely perform the following tasks in active directory:
  • Change properties on a particular container. For example, when a new software package is available, administrators may create a group policy that controls software distribution.
  • Create and delete objects of a specific type. In an organizational unit, specific types may include users, groups, and printers. When a new employee joins the organization, for example, a user account is created for the employee and then the employee is added to the appropriate organizational unit or group.
  • Update specific properties on specific object types. In an organizational unit, this is perhaps the most common administrative task performed. Updating properties include tasks such as resetting passwords and changing an employee’s personal information, such as his/her home address and phone number, when he/she moves.

Delegation of Administrative Control

Use the Delegation of Control Wizard to delegate administrative control of Active Directory objects such as organizational units. By using the wizard, you can delegate common administrative tasks such as creating, deleting, and managing user accounts.
To delegate common administrative tasks for an organizational unit, perform the following steps:
  • Start the Delegation of Control Wizard by performing the following steps:
    • Open Active Directory Users and Computers.
    • In the console tree, double-click the domain node.
    • In the details pane, right-click the organizational unit, click Delegate Control, and then click Next.
  • Select the users or groups to which common administrative tasks will be delegated. To do so, perform the following steps:
    • On the Users or Groups page, click Add.
    • In the Select Users, Computers, or Groups dialog box, type the names of the users and groups to which control of the organizational unit is to be delegated, click OK, and then click Next.
  • Assign the common tasks to delegate. To do so, perform the following steps:
    • On the Tasks to Delegate page, click Delegate the following common tasks.
    • On the Tasks to Delegate page, select the tasks to be delegated, and then click Next.
  • Click Finish.
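As an alternative to the wizard, delegations like these can also be granted with the dsacls.exe support tool. The distinguished name and group below are hypothetical, and the permission string is a sketch of granting create/delete rights over user objects; check dsacls /? for the exact permission codes:

```
:: Allow the HelpDesk group to create and delete user objects in the Sales OU
dsacls "OU=Sales,DC=example,DC=com" /G "EXAMPLE\HelpDesk:CCDC;user"
```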

Customizing Delegated Administrative Control

In addition to using the Delegation of Control Wizard to delegate a common set of administrative tasks such as the creation, deletion, and management of user accounts, you can use the wizard to select a set of custom tasks and delegate control of only those tasks.
For example, users can delegate control of all existing objects in an organizational unit and any new objects that are added or select the objects in the organizational unit to delegate administrative control of, such as only user objects in an organizational unit. Users can also specify that they want to delegate only the creation of the selected objects, the deletion of the object, or both.
To delegate custom administrative tasks for an organizational unit, perform the following steps:
  • Start the Delegation of Control Wizard.
  • Select the users or groups to which administrative tasks will be delegated.
  • Assign the custom tasks to delegate. To do this, perform the following steps:
    • On the Tasks to Delegate page, click Create a custom task to delegate, and then click Next.
    • On the Active Directory Object Type page, select one of the following options:
      • Click This folder, existing objects in this folder, and creation of new objects in this folder, and then click Next.
      • Click Only the following objects in the folder, select the Active Directory object types to delegate control of, and then click Next.
    • Select the permissions to be delegated, and then click Next.
  • Click Finish.

Replication Topology in Active Directory

Replication topology is the route by which replication data travels throughout a network. Replication occurs between two domain controllers at a time. Over time, replication synchronizes information in Active Directory for an entire forest of domain controllers. To create a replication topology, Active Directory must determine which domain controllers replicate data with other domain controllers.
The Knowledge Consistency Checker (KCC) is a built-in process that runs on each domain controller and regenerates the replication topology for all directory partitions that are contained on that domain controller. The KCC runs at a specified interval (every 15 minutes by default) and designates replication routes between domain controllers using the most favorable connections available at the time.

How the KCC Works

To generate a replication topology automatically, the KCC evaluates information in the configuration partition on sites, the cost of sending data between these sites (cost refers to the relative value of the replication paths), any existing connection objects, and the replication protocols that can be used to replicate each domain controller's directory partitions to other domain controllers. If replication within a site becomes impossible or has a single point of failure, the KCC automatically establishes new connection objects between domain controllers to maintain Active Directory replication.

Global Catalog and Replication of Partitions

A global catalog server is a domain controller that stores the two forest-wide partitions (the schema and configuration partitions), a read/write copy of the partition from its own domain, and a partial replica of every other domain partition in the forest. These partial replicas contain a read-only subset of the information in each domain partition.

When you add a new domain to a forest, the configuration partition records information about the new domain. Active Directory replicates the configuration partition to all domain controllers, including global catalog servers, through normal forest-wide replication. Each global catalog server then obtains a partial replica of the new domain by contacting a domain controller for that domain and retrieving the partial replica information. The configuration partition also provides the domain controllers with a list of all global catalog servers in the forest.
Global catalog servers register special DNS records in the DNS zone that corresponds to the forest, not the domain. These records, which are registered only in the forest root DNS zone, help clients and servers locate global catalog servers throughout the forest.
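You can verify that these records exist with an nslookup query against the forest root zone. Here example.com stands in for your forest root domain name:

```
:: Query the SRV records that global catalog servers register in the forest root zone
nslookup -type=SRV _gc._tcp.example.com
```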

Sites and Site Links

In Active Directory, sites help define the physical structure of a network. A set of TCP/IP subnet address ranges defines a site, which in turn defines a group of domain controllers that have similar connection speed and cost. Sites consist of server objects, which contain connection objects that enable replication.

When you create additional sites, you must associate each site with at least one site link. Unless a site link is in place, connections cannot be made between computers at different sites, nor can replication occur between sites. Additional site links are not created automatically; you must use Active Directory Sites and Services to create them.
When you create the first domain in a forest, Active Directory creates a default site link named DEFAULTIPSITELINK. It contains the first site and is located in the IP container in Active Directory. You can rename the site link.

To use sites to manage replication between sites, you create additional sites and subnets and delegate control of sites. Creating a site involves providing a name for the new site and associating the site with a site link. To create sites, you must log on as a member of the Enterprise Admins group or the Domain Admins group in the forest root domain.
A site link bridge creates a chain of site links that domain controllers from different sites in the site links can use to communicate directly. Disabling bridging is useful when you need to constrain the KCC to particular paths in the site link topology. By default, site link bridging is enabled and all site links are considered transitive. That is, all site links for a given transport implicitly belong to a single site link bridge for that transport. So, in a fully routed IP network, it is not necessary to configure any site link bridges. If your IP network is not fully routed, you can disable site link bridging to turn off the transitive site link feature for the IP transport, and then configure site link bridges to model the actual routing behavior of your network.

A bridgehead server is a domain controller that you designate to send and receive replicated data at each site. The bridgehead server in the originating site collects all of the replication changes and then sends them to the receiving site's bridgehead server, which replicates the changes to all domain controllers in the site.

Intersite Topology Generator

The intersite topology generator is an Active Directory process that defines the replication between the sites on a network. A single domain controller in each site is automatically designated as the intersite topology generator. Because this work is performed by the intersite topology generator, you are not required to take any action to determine the replication topology and the bridgehead server roles.
The domain controller that holds the inter-site topology generator role performs two functions:
  • It automatically selects one or more domain controllers to become bridgehead servers. This way, if a bridgehead server becomes unavailable, it automatically selects another bridgehead server, if possible.
  • It runs the KCC to determine the replication topology and the resultant connection objects that the bridgehead servers can use to communicate with the bridgehead servers of other sites.
To refresh the replication topology, first determine whether you want to refresh the replication topology between sites or the replication topology within a site.
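On Windows Server 2003, the repadmin.exe support tool can ask the KCC to recalculate the topology immediately rather than waiting for the 15-minute interval. DC01 below is a placeholder domain controller name:

```
:: Ask the KCC on DC01 to recalculate the intrasite replication topology now
repadmin /kcc DC01
```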

Short note on Global Catalog

The global catalog is a distributed data repository that is stored on global catalog servers and kept up to date via multimaster replication. It consists of a partial, searchable representation of every object in a multidomain Active Directory forest. The global catalog speeds up searches because they do not need to involve referrals to different domain controllers.
In addition, the global catalog allows you to find an object without knowing the object's domain name. This is possible because a global catalog server holds not only a full, writable replica of its own domain directory partition, but also a partial, read-only replica of all the other domain directory partitions in the forest. Because these replicas contain only the attributes most commonly used in searches, all objects in every domain in the forest, however large, can be represented in the database of a single global catalog server.
To maintain the ability to conduct full, fast, and effective searches, the global catalog is constantly updated by the Active Directory replication system. The attributes that are replicated to the catalog are known as the partial attribute set (PAS). In a Windows 2000 Server environment, a change to the PAS causes a full synchronization of the global catalog, even for a minor change. This was improved in Windows Server 2003, which replicates only the attributes that change.

How Does It Work?

As an example, if a user decides to search for all printers within the forest, a global catalog server will process the request by searching through the global catalog and then return the results. Without the global catalog server, the user would have had to search each domain separately.
When a user logs on interactively to a domain, the domain controller authenticates the user by validating the user's identity and all of the groups the user belongs to. Because the global catalog holds universal group membership information for the entire forest, access to a global catalog server is a requirement for Active Directory authentication. Therefore, it is best to have at least one global catalog server in each Active Directory site. That way, the authenticating domain controller does not need to transmit queries over a WAN connection to obtain this information.

Ports Commonly Used by Global Catalog Servers

Service Name    UDP    TCP
LDAP                   3268 (global catalog)
LDAP                   3269 (global catalog SSL)
LDAP            389    389
LDAP                   636 (SSL)
RPC/REPL               135 (endpoint mapper)
Kerberos        88     88 (global catalog)
DNS             53     53
SMB over IP     445    445