
Windows Server

Certification exams

Number Title
70-740 Installation, Storage, and Compute with Windows Server 2016
70-741 Networking with Windows Server 2016
70-742 Identity with Windows Server 2016

Find notes on labs here.

Installation

Planning a Windows Server 2016 installation means choosing the most suitable installation method, option, and edition.

  • Installation methods:
    • An upgrade is an installation performed in place with existing data intact, as opposed to a clean installation.
    • A migration is a clean installation with old data transferred over. Migrations are facilitated by PowerShell and [command prompt][SmigDeploy.exe] tools.
  • Installation options include Desktop Experience, [Server Core][Server Core], and Nano Server.
  • The most important edition is Windows Server 2016 Datacenter, the only edition with several important features that figure prominently in the exam.
  • Other editions include Windows Server 2016 Standard, Essentials, MultiPoint Premium, Storage, and Hyper-V.

Server installations are also influenced by the choice of activation model.

Licensing

Servicing channels provide a way of separating users into deployment groups for feature and quality updates.

  • Semi-Annual Channel - previously known as Current Branch for Business (CBB) - receives feature updates twice a year. It is more appropriate for non-infrastructure workloads that can be deployed through automation.
  • Long-Term Servicing Channel (LTSC) has a minimum servicing lifetime of 10 years and receives new feature releases every 2-3 years. It was designed for specialized devices such as those that control medical equipment or ATMs.

Server Core

Choosing the Windows Server 2016 Server Core installation option forgoes the possibility of later switching to Desktop Experience, as had been possible in previous versions. Notably, WDS is incompatible with Server Core installations.

Server Core installations can be managed with a GUI by using MMC snap-ins remotely. Because MMC relies on Distributed Component Object Model (DCOM) technology, firewall rules must be enabled to allow DCOM traffic (ref. Set-NetFirewallRule).
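As a sketch, the built-in firewall rule groups that common MMC snap-ins depend on can be enabled on the Server Core machine with the NetSecurity cmdlets (the display group names below are the standard built-in groups; enable the groups matching the snap-ins you actually use):

```powershell
# "COM+ Network Access" permits the underlying DCOM traffic;
# the management groups permit each snap-in's own RPC traffic.
Enable-NetFirewallRule -DisplayGroup "COM+ Network Access"
Enable-NetFirewallRule -DisplayGroup "Remote Event Log Management"
Enable-NetFirewallRule -DisplayGroup "Remote Service Management"

# Equivalent with Set-NetFirewallRule, as referenced above:
Set-NetFirewallRule -DisplayGroup "Remote Volume Management" -Enabled True
```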

Nano Server

Nano Server, a new installation option introduced in Windows Server 2016, provides a much smaller footprint and attack surface than even Server Core, but supports only some roles and features. Installation is done by building a VHD image via PowerShell on another computer. That VHD is then deployed as a VM or used as a boot drive for a physical server.

Booting a Nano Server VM produces a text-based interface called the Nano Server Recovery Console, a menu system that allows configuration of static network options (DHCP is enabled by default). The DNS server may not be configured interactively, but must be specified when building the image with the Ipv4Dns parameter.

If a Nano Server is domain-joined, a remote PowerShell session will authenticate via Kerberos. If not, its name or IP address must be added to the Trusted Hosts list.
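A minimal sketch of connecting to a non-domain-joined Nano Server (the IP address is a placeholder):

```powershell
# Add the Nano Server to the local Trusted Hosts list (a WinRM client setting).
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.10.5" -Force

# Open a remote session, authenticating with a local account on the Nano Server.
Enter-PSSession -ComputerName 192.168.10.5 -Credential (Get-Credential)
```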

The Windows Server 2016 installation media contains a NanoServer directory, from which the NanoServerImageGenerator PowerShell module must be imported. It also contains a Packages subdirectory with CAB files for roles and features; these correspond to named parameters of the image-building cmdlets, or can be passed as values to the Packages named parameter when building a Nano Server image.

Cmdlet Description
Edit-NanoServerImage Add a role or feature to an existing Nano Server VHD file
New-NanoServerImage Used to create a Nano Server VHD file for Nano Server installation
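A sketch of building a Nano Server guest VHDX with the DNS package and static network settings (paths, names, and addresses are placeholders; D:\ is assumed to be the mounted installation media):

```powershell
Import-Module D:\NanoServer\NanoServerImageGenerator

# Build a guest image from the installation media, injecting the DNS role
# package. The DNS server address must be set here with Ipv4Dns, since it
# cannot be configured later from the Recovery Console.
$params = @{
    Edition              = 'Standard'
    DeploymentType       = 'Guest'        # image for a Hyper-V VM
    MediaPath            = 'D:\'          # installation media root
    TargetPath           = 'C:\Nano\Nano1.vhdx'
    ComputerName         = 'Nano1'
    Package              = 'Microsoft-NanoServer-DNS-Package'
    InterfaceNameOrIndex = 'Ethernet'
    Ipv4Address          = '192.168.10.5'
    Ipv4SubnetMask       = '255.255.255.0'
    Ipv4Dns              = '192.168.10.1'
}
New-NanoServerImage @params
```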

Activation

Server installations are influenced by the choice of activation model. MAK is suitable for small networks, but large enterprises may opt for KMS.

  • MAK activations are subdivided into Independent and Proxy, based on whether or not a VAMT is used.
  • KMS activations, which distribute GVLKs, are valid for a period of time and require the installation of a role and management tools. KMS operates on TCP port 1688.
  • AVMA simplifies the process of activating Hyper-V VMs running Windows Server 2012 or 2016.

Active Directory-based activation is an alternative for enterprises who opt to activate licenses through the existing AD DS domain infrastructure.

Any domain-joined computers running a supported OS with a GVLK will be activated automatically and transparently. The domain must be extended to the Windows Server 2012 R2 or higher schema level, and a KMS host key must be added using the VAMT. After Microsoft verifies the KMS host key, client computers are activated by receiving an activation object from the DC (MS Docs).

Images

Many enterprises have begun virtualizing their server environments to take advantage of the many cost, reliability, and performance benefits that this change creates.

Migrations should start with systems that are peripheral to main business interests before moving on to those that are more vital. A carefully documented protocol should be developed to facilitate the conversion of physical hard disks to VHDs for use in Hyper-V guests. Supported guest OSes include Linux and FreeBSD.

The Microsoft Assessment and Planning (MAP) Toolkit is a free software tool that intelligently constructs a database of the hardware, software, and performance of computers on a network to plan for an operating system upgrade or virtualization. MAP supports the following discovery methods:

  • Active Directory Domain Services
  • Windows networking protocols
  • System Center Configuration Manager
  • IP address range scanning
  • Computer names entered manually or imported from a file

Server Core relies on the command line for system maintenance, including updates, which must be installed directly to the image using dism.exe or equivalent PowerShell commands.
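A sketch of servicing an offline image with dism.exe (paths are placeholders):

```shell
rem Mount an offline image, inject an update package, and commit the change.
dism /Mount-Image /ImageFile:C:\Images\core.vhdx /Index:1 /MountDir:C:\Mount
dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\update.cab
dism /Unmount-Image /MountDir:C:\Mount /Commit
```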

Containers

DNS

Installation

DNS server role requirements:

  • Statically assigned IP address
  • Signed-in user must be a member of the local Administrators group

There are several recommended DNS deployment scenarios, all of which involve installing DNS on a Server Core or Nano Server instance, because these installation options offer a reduced attack surface, a reduced resource footprint, and reduced patching requirements.

  • DNS on DC: All DNS features are available; supports AD-integrated, primary, secondary, and stub zones.
  • DNS on RODC: Passes DNS zone updates to a writable DC.
  • DNS on standalone member server: Supports file-based primary, secondary, and stub zones, but requires zone transfers for replication because there is no AD integration.

Nano Server

Installing DNS on a running Nano Server image requires running Install-NanoServerPackage as well as enabling the "DNS-Server-Full-Role" optional feature using Enable-WindowsOptionalFeature.
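A sketch of both steps, run in a remote session to the Nano Server (assumes the NanoServerPackage provider is reachable from the server):

```powershell
# Inject the DNS package into the running image...
Install-NanoServerPackage -Name Microsoft-NanoServer-DNS-Package

# ...then turn the role on.
Enable-WindowsOptionalFeature -Online -FeatureName DNS-Server-Full-Role
```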

As of early 2017, Nano Server supported only a few roles, including DNS, and with some limitations:

  • Nano Server can only support file-based DNS and cannot host AD-integrated zones.
  • Nano Server only supports the Semi-Annual servicing channel license.
  • Nano Server is not suitable for primary zones; it serves only as a caching-only, forwarder, or secondary zone DNS server.

Zones

Zones can be considered one or more DNS domains or subdomains, associated with zone files, which compose the DNS database itself and contain two types of entries:

  • Parser commands, which provide shorthand ways to enter records: $ORIGIN, $INCLUDE, and $TTL
  • Resource records, whitespace-delimited entries with columns for name, time to live, class, type, and data

The copies of zone files local to individual DNS servers can be primary (read/write) or secondary (read-only). A primary zone is a writable copy of a DNS zone that exists on a DNS server. A secondary zone is a read-only replica of a primary zone and necessitates the presence of a primary zone for the same zone. Defining a secondary zone via PowerShell requires specifying that zone's MasterServers.
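For instance, a secondary copy of a hypothetical contoso.com zone could be defined like this (the zone name and master server address are placeholders):

```powershell
# Define a read-only secondary zone that transfers from the primary at 192.168.10.10.
Add-DnsServerSecondaryZone -Name "contoso.com" -ZoneFile "contoso.com.dns" -MasterServers 192.168.10.10
```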

In Windows Server, zone files can also be integrated with Active Directory, making what is called an Active Directory Integrated Zone. These allow multi-master zones, meaning any DC can process zone updates and the zone can be replicated to any DC in the domain or forest.

An AD-integrated zone can be specified by passing the ReplicationScope parameter to the Add-DnsServerPrimaryZone cmdlet.
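A minimal example (the zone name is a placeholder):

```powershell
# Create an AD-integrated primary zone replicated to all DCs in the domain.
# Valid ReplicationScope values include Domain, Forest, Legacy, and Custom.
Add-DnsServerPrimaryZone -Name "contoso.com" -ReplicationScope "Domain"
```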

Stub zones contain only the name server (NS) records of another zone but, unlike a forwarder, are able to update when name servers in the target zone change.

Reverse Lookup zones are used to resolve IP addresses to FQDNs. Reverse lookup zones for public IP address space are often administered by ISPs, and they are useful in spam filtering to double-check the source domain name with the IP address.

GlobalNames zones provide "single label name resolution" (as opposed to a FQDN) and are intended to replace WINS servers.

Query traffic

The process of resolving a query by querying other DNS servers is called recursion. Recursion can be disabled outright but Windows Server 2016 supports recursion scopes which will allow recursion to be disabled unless certain conditions are met (such as receiving the request on a particular interface).

There are two types of query in the context of recursion:

  • Recursive query: sent by the petitioner; that is, the original query which begins recursion.
  • Iterative query: individual queries sent out to authoritative name servers in order to resolve a recursive query.

Root hints are preconfigured root servers that are necessary to begin the recursion process. The DNS Server service stores root hints in %systemroot%\System32\dns\CACHE.DNS. These can be edited through the GUI or by using the PowerShell commands Add-, Import-, Remove-, and Set-DnsServerRootHint.

Forwarding of a request occurs when a petitioned DNS server is unable to resolve the query because it is both:

  • Non-authoritative for the specified zone, and
  • Does not have the response cached.

Two actions are possible when forwarding:

  • Configure a DNS server only to respond to queries it can satisfy by referencing locally stored zone information, forwarding all other requests.
  • Configure forwarding for specific zones through conditional forwarding.

A secondary zone is not to be confused with delegation, where a DNS server delegates authority over part of its namespace (i.e. a subdomain) to one or more other servers.

Windows Server 2016 supports a DNS GlobalNames zone meant to supersede WINS, which served a role similar to DNS for the old NetBIOS naming standard. NetBIOS names use a nonhierarchical structure (i.e., a single name, not divisible into subdomains) of up to 16 characters (although the 16th character defines a particular service running on the host named by the previous 15). An organization must share a single GlobalNames zone, which must be created manually in PowerShell.
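A sketch of the manual creation steps:

```powershell
# Create the forest-replicated zone itself (the name must be GlobalNames)...
Add-DnsServerPrimaryZone -Name GlobalNames -ReplicationScope Forest

# ...then enable GlobalNames zone support on the DNS server.
Set-DnsServerGlobalNameZone -Enable $true
```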

Resource records

Zone scavenging allows DNS servers to remove stale records. This feature is disabled by default, but can be enabled at the server or zone level.
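For example, scavenging could be enabled server-wide and aging for a single zone (the interval and zone name are illustrative placeholders):

```powershell
# Server level: scavenge stale records every 7 days, on all zones.
Set-DnsServerScavenging -ScavengingState $true -ScavengingInterval 7.00:00:00 -ApplyOnAllZones

# Zone level: enable aging for one zone.
Set-DnsServerZoneAging -Name "contoso.com" -Aging $true
```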

Type Description
A IPv4 address record
AAAA IPv6 address record
CNAME Hostname or alias for hosts in the domain
MX Where mail for the domain should be delivered
NS Name servers
PTR Reverse lookup
SOA Start of Authority; each zone contains exactly one SOA record
SRV Generalized service location record, used for newer protocols instead of protocol-specific records
TXT Typically holds machine-readable data

Security

  • DNSSEC offers security features using public key certificates.
  • A socket pool can be used to configure the DNS server to use a random source port when issuing DNS queries.
  • Response rate limiting can pose a defense against DNS DoS attacks by ignoring potentially malicious, repetitive requests.
  • DNS-based Authentication of Named Entities (DANE) is supported by Windows Server 2016 to protect against man-in-the-middle attacks. DANE works by informing DNS clients requesting records from the domain which Certification Authority they must expect digital certificates to be issued from.

Policies

Zone transfer policies can prevent or allow zone transfers to any server, to name servers, or to servers specified by FQDN or IP address. DNS Policy is a new feature in Windows Server 2016 that can control DNS server behavior depending on certain criteria.

These criteria include:

  • Client subnet
  • Recursion scope
  • Zone scope

DNSSEC

DNSSEC is a security extension to DNS that enables all DNS records in a zone to be digitally signed; a trust anchor is used to validate the zone's DNSKEY resource records. Root and top-level domain zones already have trust anchors configured and merely have to have DNSSEC enabled.

To implement trust anchors:

  • A TrustAnchors zone must be created, which will store public keys associated with specific zones. A trust anchor from the secured zone must be created on every DNS server that hosts the zone.
  • A Name Resolution Policy Table (NRPT) GPO must be created (Windows Settings\Name Resolution Policy). This option can require DNSSEC based on computer name prefix or suffix, FQDN, or subnet.
  • The DNSSEC key master is a special DNS server that generates and manages signing keys for DNSSEC-protected zones.

DANE allows you to publish certificate information within the DNS zone, rather than relying on one of the thousands of trusted CAs. This protects against rogue or compromised CAs issuing illegitimate TLS certificates.

Two cryptographic keys:

  • Zone Signing Key (ZSK) signs zone data, including individual resource records other than DNSKEY records.
  • Key Signing Key (KSK) signs all DNSKEY records at the zone root, including the ZSK.

DNSSEC record types:

  • RRSIG: "resource record signature," each of which matches and provides a signature for an existing record in a zone
  • NSEC: proves the nonexistence of a record
  • NSEC3: NSEC replacement that prevents zone walking
  • NSEC3PARAM: specifies the NSEC3 records to include in responses for DNS names that don't exist
  • DNSKEY: stores the public key used to verify a signature
  • DS: delegation signer records, which secure delegations

DSC

Infrastructure-as-code management of servers is possible with Desired State Configuration (DSC), a feature of Windows PowerShell in which script files stored on a central server apply a specific configuration to nodes. These scripts are idempotent, meaning that they can be applied repeatedly without generating errors.

The DSC model is composed of three phases:

  1. Authoring Phase, where MOF definitions are created
  2. Staging Phase, where declarative MOFs are staged and a Configuration calculated per node
  3. "Make It So" Phase, where declarative Configurations are implemented through imperative providers

Components of DSC scripts include:

  • Local Configuration Manager (LCM): engine running on the client system that receives configurations from the DSC server and applies them to the target.
  • Node block: specifies the names of target computers.
  • Resource block: specifies settings or components and the values that the configuration script should assign to them.

DSC configurations can be deployed in two different refresh modes:

  • Pull architecture: the target LCM periodically retrieves its configuration from a Pull Server, which consolidates MOF files.
  • Push architecture: the configuration is sent to the target in response to an explicit invocation of Start-DscConfiguration on the server.

LCM has to be configured to accept Configurations of either refresh mode.
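Using the LCMConfig and InstallTelnetLocal configurations defined under Tasks below, the push workflow can be sketched end to end (output paths are placeholders):

```powershell
# 1. Compile the meta-configuration and apply it to the local LCM.
LCMConfig -OutputPath C:\DSC\Meta             # produces localhost.meta.mof
Set-DscLocalConfigurationManager -Path C:\DSC\Meta

# 2. Compile a configuration into a MOF and push it to the node.
InstallTelnetLocal -OutputPath C:\DSC\Telnet  # produces localhost.mof
Start-DscConfiguration -Path C:\DSC\Telnet -Wait -Verbose
```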

Tasks

Set LCM to push mode

[DSCLocalConfigurationManager()]
Configuration LCMConfig 
{
  Node localhost 
  {
    Settings 
    {
      RefreshMode = 'Push'
    }
  }
}

Install Telnet client

Configuration InstallTelnetLocal { 
  Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
  Node localhost 
  {
    WindowsOptionalFeature InstallTelnet 
    {
      Name = "Telnet-Client"
      Ensure = "Present"
    }
  }
}

Install WSL

Configuration InstallWSLLocal { 
  Import-DscResource -ModuleName 'PSDesiredStateConfiguration'
  Node localhost   
  {
    WindowsOptionalFeature InstallWSL     
    {
      Name = "Microsoft-Windows-Subsystem-Linux"
      Ensure = "Present" 
    }
  }
}

Failover clusters

Failover clusters are composed of computers called nodes, which typically possess a secondary network adapter used for cluster communications. Clusters can be created using New-Cluster.

Before Windows Server 2016, all cluster nodes had to belong to the same domain, but now this is but one of several possible cluster types called a single-domain cluster. A failover cluster can also be multi-domain, or workgroup, depending on how or if the servers are joined to domains. A cluster can also be detached from AD, even though its nodes are joined.

A cluster whose servers are joined to a single domain is typically associated with a cluster name object in Active Directory, which serves as its administrative access point. A workgroup cluster or a detached cluster needs to have the cluster's network name registered in DNS as its administrative access point, which can be specified in PowerShell with the AdministrativeAccessPoint named parameter. Additionally, on a workgroup cluster the same local administrator account must be created on every node, preferably the built-in Administrator account, although a different account can be configured if a particular Registry key is [created][New-ItemProperty] on each node.
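A sketch of creating a workgroup or detached cluster with a DNS administrative access point (node names and the address are placeholders):

```powershell
# Create a cluster whose network name is registered in DNS rather than in AD.
# AdministrativeAccessPoint values: ActiveDirectoryAndDns (default), Dns, None.
New-Cluster -Name Cluster1 -Node Server1, Server2 -StaticAddress 192.168.10.50 -AdministrativeAccessPoint Dns
```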

Nodes that are domain-joined support CredSSP or Kerberos authentication, but workgroup nodes support NTLM authentication only.

Three types of witness resources can help a cluster maintain quorum. This is necessary to prevent a split-brain situation, where communication failures between nodes cause separate segments of the cluster to continue operating independently of each other. A witness should be created when a cluster has an even number of nodes, and only one can be configured. [pwsh][Set-ClusterQuorum]

  • Disk witness: dedicated disk in shared storage that contains a copy of the cluster database
  • File Share witness: SMB file share containing a Witness.log file with information about the cluster
  • Cloud witness: blob stored in Azure

Scale-out File Server (SoFS) is a clustered role providing highly available storage to cluster nodes. SoFS ensures continuous availability in the case of a node failure. Using SoFS, multiple nodes can also access the same block of storage at the same time, and for this reason it is an active/active or dual-active system, as opposed to an active/passive system, where only one node provides accessible shares.

SoFS is specifically recommended for use on Hyper-V and SQL Server clusters and can be installed with Add-ClusterScaleOutFileServer.

SoFS shares are created with the [New-SmbShare][New-SmbShare] PowerShell cmdlet. SoFS shares are located on Cluster Shared Volumes (CSV), a shared disk containing an NTFS or ReFS volume that is made accessible for read and write operations by all nodes within a failover cluster.
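A sketch of creating a continuously available share on a CSV (the path and access group are placeholders):

```powershell
# Continuously available share on a Cluster Shared Volume for VM storage.
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" -FullAccess "CONTOSO\Hyper-V-Admins" -ContinuouslyAvailable $true
```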

CSVs solved a historical problem with using NTFS volumes with VMs in previous versions of Windows Server. NTFS is designed to be accessed by only one operating system instance at a time. In Windows Server 2008 and earlier, this meant that only one node could access a disk at a time, which had to be mounted and dismounted for every VM.

The solution was to create a pseudo-file system called CSVFS, sitting on top of NTFS, that enables multiple nodes to modify a disk's content at the same time while restricting access to the metadata to the owner, or coordinator. The coordinator node is the cluster node where NTFS for the clustered CSV disk is mounted; any other node is called a Data Server (DS).

VM resiliency can be configured by adjusting settings in response to changes in VM state:

  • Unmonitored: VM owning a role is not being monitored by the Cluster Service
  • Isolated: Node is not currently an active member of the cluster, but still possesses the role
  • Quarantine: Node has been drained of its roles and removed from the cluster for a specified length of time

Cluster Operating System Rolling Upgrade is a new feature that reduces downtime by making it possible for a cluster to have nodes running both Windows Server 2012 R2 and Windows Server 2016. Using this feature, nodes can be brought down for an upgrade one at a time.

When [Storage Spaces][Storage Spaces] is combined with a failover cluster, the solution is known as Clustered Storage Spaces.

High availability

Hyper-V Replica allows simple failover to occur between Hyper-V hosts without the need for a cluster. To configure a simple one-way failover solution using Hyper-V Replica, configure the destination host as a replica server, either in Hyper-V Manager or PowerShell ([Set-VMReplicationServer][Set-VMReplicationServer], lab). The destination host must also have firewall ports opened corresponding to the authentication method chosen. The source VM, which is to be replicated, must have its options configured through the Enable Replication wizard ([Enable-VMReplication][Enable-VMReplication]). To use Hyper-V Replica as a (two-way) failover solution, configure both hosts as replica servers.

Migrations can take place by one of three methods:

  • Live Migration moves only the system state and live memory contents, not data files. Live Migration requires that the hosts be, if not clustered, at least part of the same (or a trusted) domain, and that VHD files be placed on shared storage that both hosts have appropriate permissions to access. An unpopulated VM is created on the destination with the same resources as the source before memory pages are transferred. Once the servers have an identical memory state, the source VM is suspended and the destination takes over. Hyper-V notifies the network switch of the change, diverting network traffic to the destination. Authentication can be made by [CredSSP][CredSSP] or Kerberos.

When a Hyper-V cluster is created, the Failover Cluster Manager launches the High Availability Wizard, which configures the VM to support Live Migration. The same thing can be done with the PowerShell cmdlet Add-ClusterVirtualMachineRole.

Additionally, using Kerberos authentication for live migration requires constrained delegation, which enables a server to act on behalf of a user for only certain defined services. This must be configured within Active Directory Users and Computers, by opening the Properties of the source Computer object and changing the setting under the Delegation tab.

An additional, outdated method of migration is quick migration, which was present in Windows Server prior to the introduction of live migration and persists in Windows Server 2016 for backward compatibility. A quick migration involves pausing the VM, saving its state, moving the VM to the new owner, and starting it again. A quick migration always involves a short period of VM downtime.

  • Shared Nothing Live Migration requires that source and destination VMs be members of the same (or trusted) domain, and source and destination servers must be running the same processor family (Intel or AMD) and linked by an Ethernet network running a minimum of 1 Gbps. Additionally, both Hyper-V hosts must be running identical virtual switches that use the same name; otherwise the migration process will be interrupted to prompt the operator to select a switch on the destination server. The process of migrating is almost identical to a Live Migration, except that you select the "Move the Virtual Machine's Data To a Single Location" option on the Choose Move Options page of the Move Wizard.
  • Storage Migration works by first creating new VHDs on the destination corresponding to those on the source server. While the source server continues to operate using local files, Hyper-V begins mirroring disk writes to the destination server and starts a single-pass copy of the source disks to the destination, skipping blocks that have already been copied. Once the copy has completed, the VM begins working from the destination server and the source files are deleted. For a VM that is shut off, storage migration is equivalent to simply copying files from source to destination.

Site-aware clusters have failover affinity. Node fairness evaluates memory and CPU loads on cluster nodes over time.

Cluster management

VM Monitoring allows specific services to be restarted or failed over when a problem occurs. To use VM Monitoring:

  • The guest must be joined to the same domain as the host
  • The host administrator must be a member of the guest's local Administrators group
  • Windows Firewall rules in the Virtual Machine Monitoring group must be enabled

The service can then be monitored using [Add-ClusterVMMonitoredItem][Add-ClusterVMMonitoredItem].
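For example, monitoring the Print Spooler service inside a clustered guest (the VM name is a placeholder):

```powershell
# If the Spooler service fails repeatedly inside TestVM, the cluster can
# restart the VM or fail the role over to another node.
Add-ClusterVMMonitoredItem -VirtualMachine "TestVM" -Service "Spooler"
```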

Migration

VMs can be moved from node to node of a cluster using live, storage, or quick migrations.

VM network health protection is a feature (enabled by default) that detects whether a VM on a cluster node has a functional connection to a designated network. If not, the cluster live migrates the VM role to another node that does have such a connection. This setting can be controlled in Hyper-V Manager > VM Settings > Advanced Features > Protected network.

GPO

Group Policy Objects (GPO) facilitate the uniform administration of large numbers of users and computers. GPOs can be local or domain-based.

Local GPOs come in several varieties, applied in the following order (last takes highest precedence):

  • Local Group Policy, applied to computers
  • Administrators and Non-Administrators Local Group Policy, applied to users based on their membership in the local Administrators group
  • User-specific Local Group Policy, applied to individual user accounts

Domain-based GPOs consist of two components, a [container][Group Policy container] and a [template][Group Policy template]. These are stored in different locations and replicated by different means.

  • Containers define the fundamental attributes of a GPO, each of which is assigned a GUID. They are stored in the AD DS database and replicated to other domain controllers on the intrasite or intersite AD DS replication schedule.
  • Templates, collections of files and folders that define the actual GPO settings, are stored in the SYSVOL shared folder (%SystemRoot%\SYSVOL\Domain\Policies\{GUID}) on all DCs. SYSVOL replication has been handled by the DFS Replication service since Windows Server 2008.

A GPO consists of two top-level nodes:

  • Computer Configuration contains settings that are applied to computer objects to which the GPO is linked
  • User Configuration contains user-related settings, applied when a user signs in and refreshed automatically every 90-120 minutes thereafter

Beneath each of these nodes are folders that group settings:

  • Software Settings
  • Windows Settings, which allows basic configuration for computers or users
  • Administrative Templates, which contains Registry settings that control user, computer, and app behavior, grouped logically into folders

Although domain controllers store and serve GPOs, the client computer itself must request and apply the GPOs using the Group Policy Client service. Client-side extensions process the GPOs once downloaded.

Starter GPOs are intended for use in large organizations with a proliferation of GPOs that share settings. Starter GPOs can be imported from, and exported to, a .CAB file.

Once a GPO is created it must be linked to a container object in AD DS for it to apply to objects, a process known as scoping. GPOs can be linked to Sites, Domains, and OUs. If multiple GPOs are linked to the same container, the link order must be configured.

There are two default GPOs in an AD DS domain, which can be reset using arguments to the dcgpofix command:

  • Default Domain Policy, linked to the domain object
  • Default Domain Controllers Policy, linked to the Domain Controllers OU
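For example, restoring only the Default Domain Policy to its original state (run on a DC):

```shell
rem /target accepts Domain, DC, or Both.
dcgpofix /target:Domain
```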

Although it is possible to link the same GPO to multiple containers, it is recommended instead to import (i.e., copy) a GPO from another domain. This process effectively restores the settings of another GPO into a newly created GPO, which is then linked to another container.

Hyper-V

[Hyper-V][Hyper-V] is a Type I hypervisor and role that allows a Windows Server 2016 host to create VMs, called guests. In Type I virtualization, the hypervisor forms an abstraction layer that interacts directly with the host hardware. In this model, the individual environments created by the hypervisor, including the host operating system and guest VMs, are called [partitions][partition].

Hyper-V Server, a free product available for download, is limited to the command-line Server Core interface; however, it does include Sconfig to aid configuration.

Hyper-V can be managed remotely using the GUI (Hyper-V Manager, hyper-v-tools), or Powershell (hyper-v-powershell). Authentication can be via Kerberos or Credential Security Support Provider (CredSSP), which must be enabled on both server and client.

PowerShell remoting:

  • Explicit remoting involves opening a PowerShell session to a remote computer.
  • Implicit remoting involves running a cmdlet specifying the ComputerName parameter.

PowerShell Direct allows easy remoting to VMs from the host through a PowerShell session using the -VMName parameter.
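A quick sketch contrasting the three approaches (computer and VM names are placeholders):

```powershell
# Explicit remoting: open an interactive session over WinRM.
Enter-PSSession -ComputerName HyperV1

# Implicit remoting: run a single cmdlet against a remote computer.
Get-VM -ComputerName HyperV1

# PowerShell Direct: connect to a guest through the VMBus, no network needed
# (run on the Hyper-V host; requires guest credentials).
Enter-PSSession -VMName Nano1 -Credential (Get-Credential)
```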

[Nested virtualization][Nested virtualization] is a new capability where a virtual host running Windows Server 2016 on a physical host also running Windows Server 2016 can host nested VMs.

Host configuration


Networking

Virtual switches can be external, internal, or private (in order of decreasing access). Up to 8 network adapters can be [added][Add-VMNetworkAdapter] to a Windows Server 2016 Hyper-V VM.

Hyper-V maintains a pool of MAC addresses which are assigned to virtual network adapters as they are created. Hyper-V MAC addresses begin with 00-15-5D, followed by the last two bytes of the IP address assigned to the server's physical network adapter (i.e. last two octets), then a final byte.

Generation 1 VMs supported synthetic and legacy virtual network adapters, but in Generation 2 VMs only synthetic adapters are used. Generation 1 VMs can only boot from network (PXE) when using a legacy adapter.

Physical hosts running Windows Server 2016 can support teams of up to 32 NICs, but Hyper-V VMs are limited to teams of two. The team must first be configured in the host operating system and appears as a single interface in the Virtual Switch Manager. Switch Embedded Teaming (SET), reliant on RDMA, can only be configured with PowerShell.

  • Teaming Mode
    • Switch Independent: switch is unaware of the presence of the NIC team and does not load-balance to members; Windows performs the teaming
    • Switch Dependent: switch determines how to distribute inbound network traffic; only supported by specialty hardware
      • Static Teaming: switch and host are manually configured (typically supported by server-class switches)
      • Link Aggregation Control Protocol (LACP): dynamically identifies links that are connected between the host and the switch
  • Load Balancing Mode
    • Address Hash: a hash is created based on address components of the packet, which is used to reasonably balance adapters
    • Hyper-V Port: NIC teams configured on Hyper-V hosts give VMs independent MAC addresses
    • Dynamic: outbound loads are distributed based on a hash of the TCP ports and IP addresses

Virtual machine queue (VMQ) enhances network performance if the physical host's adapters support it and it is enabled.

Bandwidth management is achieved by setting limits on the virtual network adapter, in the GUI or in [Powershell][Set-VMNetworkAdapter].
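A minimal sketch of a bandwidth cap and floor on a virtual network adapter (the VM name is hypothetical):

```pwsh
# Values are in bits per second; 100MB here evaluates to 104,857,600,
# i.e. roughly a 100 Mbps cap with a ~10 Mbps guaranteed minimum
Set-VMNetworkAdapter -VMName "SRV01" -MaximumBandwidth 100MB -MinimumBandwidthAbsolute 10MB
```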

Storage

The New Virtual Machine Wizard presents different options for Generation 1 vs. Generation 2 VMs.

  • Generation 1 VMs provide two IDE controllers, which host the hard drive and a DVD drive, and an unpopulated SCSI controller which can host additional disks.
  • Generation 2 VMs, however, have only a single SCSI controller, which hosts all virtual drives.

A new VHD can be created using:

  • Hyper-V Manager, through the New Virtual Hard Disk Wizard
  • Disk Management (diskmgmt.msc), although the option to create a differencing disk is not available, nor can a specific block or sector size be specified
  • PowerShell
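The PowerShell route covers the cases Disk Management cannot; paths and sizes here are illustrative:

```pwsh
# 50 GB dynamically expanding VHDX
New-VHD -Path "D:\VHDs\data.vhdx" -SizeBytes 50GB -Dynamic

# Fixed-size disk with a specific logical sector size (not possible in diskmgmt.msc)
New-VHD -Path "D:\VHDs\fixed.vhdx" -SizeBytes 20GB -Fixed -LogicalSectorSizeBytes 4096

# Differencing disk with a parent (not possible in diskmgmt.msc)
New-VHD -Path "D:\VHDs\diff.vhdx" -ParentPath "D:\VHDs\base.vhdx" -Differencing
```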

Shared virtual disk files are preferably created as VHD sets. Pass-through disks make exclusive use of a physical disk.

Standard checkpoints (known as "snapshots" in Windows Server 2012 and before), stored with the extensions AVHD or AVHDX, save the state, data, and hardware configuration of a VM. They are recommended for development and testing but are not a replacement for backup software, nor are they recommended for production environments, because restoring them in production will interrupt running services. Production checkpoints do not save memory state, but use Volume Shadow Copy Service (Windows) or File System Freeze (Linux) inside the guest to create "point in time" images of the VM.
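A sketch of configuring and taking production checkpoints in PowerShell (the VM and checkpoint names are hypothetical):

```pwsh
# Require production checkpoints, with no fallback to standard checkpoints
Set-VM -Name "SRV01" -CheckpointType ProductionOnly

# Take a checkpoint, then roll back to it later
Checkpoint-VM -Name "SRV01" -SnapshotName "Before update"
Restore-VMSnapshot -VMName "SRV01" -Name "Before update" -Confirm:$false
```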

Shielded VMs

Shielded VMs are a feature exclusive to the Datacenter Edition of Windows Server 2016.

As a result of increased virtualization, physical servers that were once secured physically were migrated to Hyper-V hosts, where they are less secure because they are accessible to fabric administrators. Shielded VMs were introduced to protect tenant workloads from inspection, theft, and tampering when run on potentially compromised hosts.

A security concept closely associated with shielded VMs is the guarded fabric, which is a collection of nodes cooperating to protect shielded Hyper-V guests. The guarded fabric consists of:

  • Host Guardian Service (HGS): utilizes remote attestation to confirm that a node is trusted; if so, it releases a key enabling the shielded VM to be started. HGS is typically a cluster of 3 nodes.
  • Guarded hosts: Windows Server 2016 Datacenter edition Hyper-V hosts that can run shielded VMs only if they can prove to the Host Guardian Service that they are running in a known, trusted state.
  • Shielded VMs

In a production environment, a fabric manager like Virtual Machine Manager would be used to deploy shielded VMs (which are signified by a shield icon).

Shielded VMs must run Windows (8 or later) or Windows Server (2012 or later), although Linux shielded VMs are also supported since Windows Server version 1709.

Shielded VMs are produced by a three-stage process (VHD -> shielded template -> shielded VMs):

  1. Preparation: install and configure an OS onto a virtual disk file
  2. Templatization: convert the virtual disk file into a shielded template
  3. Provisioning: create one or more shielded VMs from the shielded template

Configure HGS in its own new forest YouTube

Install-WindowsFeature HostGuardianServiceRole -Restart
Install-HgsServer -HgsDomainName 'savtechhgs.net' -SafeModeAdministratorPassword $adminPassword -Restart

Shielding Data is created and owned by tenant VM owners and contains secrets needed to create shielded VMs that must be protected from the fabric admin.

Attestation

There are two modes of attestation supported by HGS:

  • Hardware-trusted attestation mode requires:
    • Measured boot: TPMv2 to seal software and hardware configuration details measured at boot
    • Code integrity enforcement to strictly define permissible software
    • Platform Identity Verification: Active Directory is not sufficient to identify the host. Rather, an identity key rooted in the host TPM is used for identity.
  • Remote attestation based on asymmetric key pairs
  • Admin-trusted attestation was previously based on guarded host membership in a designated AD DS security group, but is deprecated beginning with Windows Server 2019.
    • Host identity is [verified](https://youtu.be/B2vFrdXd5jg?t=525) by checking security group permission
    • No Measured Boot or Code Integrity Validation
    • Intended to aid transition to Hardware-trusted attestation mode for hosts produced before TPMv2

VM configuration

VMs are associated with a variety of file types:

Extension Description
.vmcx Binary-format VM configuration (earlier Hyper-V versions used XML files)
.vhd, .vhdx Virtual hard disks
.vsv Saved-state files

VMs can be created in Hyper-V Manager or with New-VM, and a running machine's RAM can even be changed dynamically.
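Dynamic memory can be sketched as follows; the VM name and memory ranges are illustrative:

```pwsh
# Enable dynamic memory and set its floor, startup value, and ceiling
Set-VMMemory -VMName "SRV01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```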

Hyper-V guests can take advantage of a suite of features to enhance performance and functionality:

  • Virtualization of NUMA architecture
  • Smart paging, for when VMs that use dynamic memory restart and temporarily need more memory than is available on the host, for example at boot
  • Resource usage monitoring, to minimize cost overruns when guests run in the cloud
  • Discrete Device Assignment (DDA) to pass through disks, GPUs, and other PCIe devices
  • Increased performance of interactive sessions that use [VMConnect][VMConnect.exe]
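A DDA pass-through sketch: the device is dismounted from the host, then assigned to the VM. The location path and VM name below are hypothetical; real paths come from Device Manager or Get-PnpDevice.

```pwsh
# Hypothetical PCIe location path of the device to pass through
$location = "PCIROOT(0)#PCI(0300)#PCI(0000)"

# Detach the device from the host, then assign it to the guest
Dismount-VMHostAssignableDevice -LocationPath $location -Force
Add-VMAssignableDevice -LocationPath $location -VMName "SRV01"
```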

Microsoft supports some Linux distributions, like Ubuntu, with built-in Linux Integration Services, which improve performance by providing custom drivers to interface with Hyper-V. Some distributions like CentOS and Oracle come with integrated LIS packages, but free LIS packages provided by Microsoft for download from the Microsoft Download Center support additional features and come with the additional benefit of being versioned. These packages are provided as tarballs or ISO images, and must be loaded directly into the running guest operating system. FreeBSD has included full support for FreeBSD Integration Services (BIS) since version 10.

By default, Generation 2 VMs use a Secure Boot template that contains only Microsoft Windows certificates, so for Linux guests Secure Boot must either be disabled or switched to the Microsoft UEFI Certificate Authority template, which covers the distributions supported by Microsoft.

Different versions of Hyper-V create VMs associated with that version (Windows Server 2016 uses Hyper-V 8.0). VMs created by older versions of Hyper-V can be [updated][Update-VMVersion], but once updated they may no longer run on a host of a previous version.

Importing an exported VM can be done in three ways:

  • Register: exported files are left as-is and the guest's ID is maintained
  • Restore: exported files are copied to the host's default locations, or ones that are otherwise specified; the ID is maintained
  • Copy: exported files are copied and a new ID is generated
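Two of the import styles sketched with Import-VM; the paths are hypothetical:

```pwsh
# Register in place (the default): uses the exported files where they
# are and keeps the existing VM ID
Import-VM -Path "D:\Export\SRV01\Virtual Machines\config.vmcx"

# Copy the exported files to new locations and generate a new VM ID
Import-VM -Path "D:\Export\SRV01\Virtual Machines\config.vmcx" -Copy -GenerateNewId `
    -VirtualMachinePath "D:\VMs" -VhdDestinationPath "D:\VHDs"
```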

PXE boot is supported in two scenarios:

  • Generation 1 VMs with a legacy virtual network adapter (but not a synthetic one). Generation 1 VMs are limited to 2 TB disks and do not support many of the advanced features that Generation 2 VMs do, but PXE boot remains one of the primary reasons to continue using a Generation 1 VM.
  • Generation 2 VMs with a synthetic network adapter, which also supports bandwidth management and VMQ.

Generation 2 VMs support only certain 64-bit guest operating systems; unsupported guests include:

  • 32-bit OSes (all)
  • Windows Server 2008 and 2008 R2
  • Windows 7
  • Older Linux distros
  • FreeBSD (all versions)

VMs cannot easily be upgraded from Generation 1 to Generation 2, although a script named Convert-VMGeneration was once provided by Microsoft and can still be found. The VM's version, referring to the version of Hyper-V used to create it, can however be upgraded with Update-VMVersion.
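Checking and upgrading a VM's configuration version can be sketched as follows (the VM name is hypothetical; the upgrade is one-way):

```pwsh
# Current configuration version of the VM
Get-VM -Name "SRV01" | Format-Table Name, Version

# Configuration versions this host can run
Get-VMHostSupportedVersion

# Upgrade the VM to the host's version (cannot be undone)
Update-VMVersion -Name "SRV01"
```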

Monitoring

Performance Monitor is a program that allows realtime monitoring of hundreds of different system performance statistics, called performance counters. Counters can be viewed in several ways, including line graph, histogram bar graph, and report views. Every counter added to a graph is associated with a computer, a performance object (hardware or software component to be monitored), a performance counter (statistic), and an instance.

A data collector set captures counter statistics for later review. A single data collector set can gather performance counter data from multiple VMs. Event trace data cannot be combined with performance data in the same data collector set. Expiration dates can be set for data collector sets, but if actively collecting data the expiration date will not stop collection.

A performance alert is a type of data collector set that can track system performance and log events in the application event log.

Alerts can be triggered when a performance counter value exceeds a certain threshold. Only members of the local groups Administrators and Performance Log Users can create alerts, but the Log on as a batch job right must be granted to members of Performance Log Users.

A hard fault occurs when a requested page must be read from disk rather than found in memory.

Performance counters

Counter Acceptable values
Processor: % Processor Time <85%
Processor: Interrupts/Sec cf. baseline
System: Processor Queue Length <2
Server Work Queues: Queue Length <4
Memory: Page Faults/Sec <5
Memory: Pages/Sec <20
Memory: Available MBytes >5% of physical memory
Memory: Committed Bytes < physical memory
Memory: Pool Non-Paged Bytes Stable
PhysicalDisk: Disk Bytes/Sec cf. baseline
PhysicalDisk: Avg. Disk Bytes/Transfer cf. baseline
PhysicalDisk: Current Disk Queue Length <2 per spindle
PhysicalDisk: % Disk Time <90%
LogicalDisk: % Free Space >20%
Network Interface: Bytes Total/Sec cf. baseline
Network Interface: Output Queue Length <2
Server: Bytes Total/Sec <50% of total bandwidth

Network Load Balancing

Cluster nodes can be configured to drain their workloads to other nodes when being shut down, using Suspend-ClusterNode.
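A sketch of draining a node for maintenance and bringing it back (the node name is hypothetical):

```pwsh
# Drain all roles off the node and wait for the drain to finish
Suspend-ClusterNode -Name "NODE2" -Drain -Wait

# After maintenance, resume the node and fail its roles back
Resume-ClusterNode -Name "NODE2" -Failback Immediate
```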

NLB Clusters are made of hosts, while Failover Clusters are made of nodes.

NLB port rules control how the cluster functions and are defined by two operational parameters:

  • Affinity: associate client requests to cluster hosts. When no affinity is specified, all network requests are load-balanced across the cluster without regard to their source.
  • Filtering mode: specify how the cluster handles traffic described by port range and protocols; can be single or multiple hosts.

When a port rule is not configured, the default host will receive all network traffic.

Windows Server NLB clusters can be upgraded to Windows Server 2016 in two ways:

  • Rolling upgrade: only a single host is brought down at a time, upgraded, and added back before proceeding to the next one
  • Simultaneous upgrade: the entire NLB cluster is brought down at once

NLB clusters have a Cluster Operation Mode setting specifying what kind of TCP/IP traffic the cluster hosts should use:

  • Unicast: NLB replaces the MAC address on the interface with the cluster's virtual MAC address, causing traffic to go to all hosts. Cluster hosts are prevented from communicating with each other in this mode, so a second network adapter must be installed in order to facilitate normal communication between NLB cluster hosts.
  • Multicast: NLB adds a multicast MAC address to the network interface on each host without replacing the original.

Storage

Every track of a hard drive platter is split into disk sectors, traditionally 512 bytes. A block is called an "allocation unit" in Windows, but is also commonly known as a cluster. Storage left unused in partially filled blocks is known as slack space.

A new disk must first be initialized, that is, a partition table style must be chosen:

  • GPT: 128 partitions per disk, maximum volume size of 18 exabytes (2^64 bytes). Booting from a GPT drive is not possible unless the computer architecture supports EFI-based boot partitions.
  • MBR: older format that is commonly used for removable media, supporting volumes up to 2 TB with up to 4 primary partitions, although a common workaround is to make one of these partitions an extended partition, which can be further subdivided into logical drives
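Bringing a new disk online can be sketched end-to-end in PowerShell; the disk number and label are illustrative:

```pwsh
# Initialize disk 1 with a GPT partition table
Initialize-Disk -Number 1 -PartitionStyle GPT

# Create a partition spanning the disk, assign a letter, and format it NTFS
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```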

Mounting a partition as a single filesystem produces a volume, although the distinction can often be lost. The exception would be a case where a volume spans multiple partitions or physical disks, as is possible with software RAID.

Virtual hard disks can be created with [PowerShell][New-VHD] or in diskmgmt.msc and come in two formats: VHD and VHDX.

Only two filesystem options are available for modern servers:

  • NTFS supports volumes up to 16 TB with the default 4 KB allocation unit size (256 TB with a 64 KB allocation unit size) and is required by some Windows Server services like AD DS, File Replication Service, Volume Shadow Copy Service, and Distributed File System
  • ReFS uses the same system of permissions as NTFS and offers error checking and repair capabilities that NTFS does not, but it does not support NTFS features like file compression, Encrypted File System, and disk quotas. ReFS supports a maximum file size of 16 exabytes and volumes up to 1 yobibyte (2^80 bytes)

Software RAID can be implemented by creating Spanned, Striped, or RAID-5 volumes in diskmgmt.msc. A more modern and preferred technique is to create storage pools in [Storage Spaces][Storage Spaces].

Dedup

Data deduplication ("dedup") is a role service that conserves storage space by storing only one copy of redundant chunks of files. Data deduplication is appropriate to specific workloads, like backup volumes and file servers. It is not appropriate for database storage, operating system data, or boot volumes.

Data deduplication originally required NTFS; ReFS is also supported since Windows Server version 1709.

Data deduplication runs as a low-priority background process when the system is idle, by default; however its behavior can be configured based on its intended usage. Deduplication works by scanning files, and breaking them into unique chunks of various sizes that are collected in a chunk store. The original locations of chunks are replaced by reparse points. When a file is recently written, it is written in the standard, unoptimized form; the accumulation of such files is known as churn. Other jobs associated with deduplication include garbage collection, integrity scrubbing, and (when disabling deduplication) unoptimization.
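Enabling and exercising dedup on a volume can be sketched as follows (the volume letter is hypothetical):

```pwsh
# The role service must be installed first
Install-WindowsFeature FS-Data-Deduplication

# Enable dedup on the volume with the general-purpose file server profile
Enable-DedupVolume -Volume "E:" -UsageType Default

# Run an optimization job immediately rather than waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Inspect space savings and optimized file counts
Get-DedupStatus -Volume "E:"
```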

There are several deployment scenarios considered for data deduplication:

  • General purpose file servers: users often store multiple copies of the same, or similar, documents and files; 30-50% of this space can be reclaimed using deduplication
  • Virtualized Desktop Infrastructure (VDI) deployments: virtual hard disks used for remote desktops are essentially identical. Deduplication can also ameliorate the drop in storage performance when many users log in simultaneously at the start of the day, called a VDI boot storm
  • Backup snapshots: an ideal deployment scenario because the data is so duplicative

Deduplication is especially useful for disk drive backups, since snapshots typically differ little from each other.

File shares

Windows Server 2016 supports file shares via two protocols, both of which require the FS-FileServer role service:

  • SMB, long the standard for Windows networks
  • NFS, typically used in Linux, which additionally requires the FS-NFS-Service role service

BranchCache enables client computers at remote locations to cache files accessed from shares, so that other computers at the same location can access them. Install the FS-BranchCache feature and enable the File and Printer Sharing and Branchcache - Hosted Cache Server (uses HTTPS) firewall display groups.

S2D

Although a cluster can normally be created in the Failover Cluster Manager GUI, to use Storage Spaces Direct the system must be prevented from automatically creating storage. The cluster is therefore created in PowerShell with the NoStorage switch parameter, and S2D is then enabled using Enable-ClusterStorageSpacesDirect. This command scans all cluster nodes for local, unpartitioned disks, adds them to a single storage pool, and classifies them by media type in order to use the fastest disks for caching.
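The sequence can be sketched as follows; the node and cluster names are hypothetical:

```pwsh
# Validate the nodes, including the Storage Spaces Direct tests
Test-Cluster -Node "NODE1","NODE2","NODE3","NODE4" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without letting it claim storage, then enable S2D
New-Cluster -Name "S2DCLUSTER" -Node "NODE1","NODE2","NODE3","NODE4" -NoStorage
Enable-ClusterStorageSpacesDirect
```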

The recommended drive configuration for a node in an S2D cluster is a minimum of six drives, with at least 2 SSDs and at least 4 HDDs, with no RAID or other intelligence that cannot be disabled.

Caching is configured automatically, depending on the combination of drives present:

  • NVMe + SSD: NVMe drives are a write-only cache for the SSD drives
  • NVMe + HDD: NVMe drives are a read/write cache
  • NVMe + SSD + HDD: NVMe drives are a write-only cache for the SSD drives and a read/write cache for the HDD drives
  • SSD + HDD: SSD drives are a read/write cache

Microsoft defined two deployment scenarios for Storage Spaces Direct:

  • Disaggregated: creates two separate clusters, one of which is a Scale-Out File Server dedicated to storage, essentially functioning as a SAN. This solution requires [DCB][DCB] for traffic management. At least two 10 Gbps Ethernet adapters are recommended per node, preferably adapters that use RDMA.
  • Hyper-converged: a single cluster hosts both VMs and storage. This solution is much less expensive because it requires less hardware and generates much less network traffic, but storage and compute can't scale independently: adding a storage node necessarily adds a Hyper-V host, and vice versa.

Storage Replica

Storage Replica supports one-way replication between standalone servers, between clusters, and between storage devices within an [asymmetric (stretch) cluster][asymmetric cluster].

  • Synchronous replication is possible when the replicated volumes can mirror data immediately, ensuring no data loss in case of failover
  • Asynchronous replication is preferable when the replication partner is located over a WAN link

Storage Replica improves on DFS Replication, which is exclusively asynchronous and file-based, by using SMBv3 (port 445). Storage Replica requires two volumes on each replication partner, one for logs and one for data, sized identically between partners, and all the physical disks must use the same sector size.
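A server-to-server partnership can be sketched with New-SRPartnership; the server names, replication group names, and volume letters below are hypothetical:

```pwsh
# One-way asynchronous replication from SRV01 to SRV02; E: is the data
# volume and F: the log volume on each partner
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "E:" -SourceLogVolumeName "F:" `
    -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "E:" -DestinationLogVolumeName "F:" `
    -ReplicationMode Asynchronous
```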

WSUS

Windows Server Update Services (WSUS) can be configured from the command-line with wsusutil.exe.

There are 5 basic WSUS architecture configurations

  • Single WSUS Server downloads updates from Microsoft Update, and all the computers on the network download updates from it. A single server can support up to 25,000 clients.
  • Replica WSUS Servers: a central WSUS server downloads from Microsoft Update, and after approval the updates are distributed to downstream servers at remote locations.
  • Autonomous WSUS Servers: a central WSUS server downloads from Microsoft Update, all of which are distributed to remote servers; each remote site's administrators are individually responsible for evaluating and approving updates.
  • Low-bandwidth WSUS Servers at remote sites download only the list of approved updates, which are then retrieved from Microsoft Update over the Internet, minimizing WAN traffic.
  • Disconnected WSUS Servers have updates imported from offline media (DVD-ROMs, portable drives, etc), utilizing no WAN or Internet bandwidth whatsoever.

When a computer first communicates with a WSUS server, it is automatically added to the All Computers and Unassigned Computers groups, which are created by default.

Windows Server Backup

To back up a VM without any downtime, integration services must be installed and enabled, and all disks must be basic disks formatted with NTFS.

Windows Server Backup:

  • System state includes boot files, Active Directory files, SYSVOL (when run on a DC), the registry, and other data
  • System reserved is a special partition containing Boot Manager and Boot Configuration Data

πŸ“˜ Glossary

adfind

Query the schema version associated with Active Directory [Desmond][Desmond2009]: 53

adfind -schema -s base objectVersion

adprep

Prepare Active Directory for Windows Server upgrades. Must be run on the Infrastructure Master role owner with the flag /domainprep. [Desmond][Desmond2009]: 29

arp

a d s

bcdedit

Change Windows bootloader to Linux, while dual booting

::Manjaro
bcdedit /set {bootmgr} path \EFI\manjaro\grubx64.efi

::Fedora
bcdedit /set {bootmgr} path \EFI\fedora\shim.efi
Enable or disable Test Signing Mode ref
bcdedit /set testsigning on
bcdedit /set testsigning off

bootrec

Windows Recovery Environment command that repairs a system partition

Use when boot sector not found

bootrec /fixboot
Use when BCD file has been corrupted
bootrec /rebuildbcd

cmdkey

add delete generic list pass smartcard user

Add a user name and password for user Mikedan to access computer Server01 with the password Kleo docs.microsoft.com

cmdkey /add:server01 /user:mikedan /pass:Kleo

dism

Add-Driver Add-Package Add-ProvisionedAppxPackage Append-Image Apply-Image Apply-Unattend Capture-Image Cleanup-Image Commit-Image Disable-Feature Enable-Feature Export-Driver Export-Image Get-Driverinfo Get-Drivers Get-Featureinfo Get-Features Get-ImageInfo Get-MountedImageInfo Get-Packageinfo Get-Packages Get-ProvisionedAppxPackages List-Image Remount-Image Remove-Driver Remove-Image Remove-Package Remove-ProvisionedAppxPackage Set-ProvisionedAppxDataFile Unmount-Image

Mount an image Zacker: 71

dism /mount-image /imagefile:$FILENAME /index:$N /name:$IMAGENAME /mountdir:$PATH
Practice Labs
dism /mount-wim /wimfile:c:\images\install.wim /index:1 /mountdir:c:\mount
Add a driver to an image file that you have already mounted Zacker: 72
dism /image:$FOLDERNAME /add-driver /driver:$DRIVERNAME /recurse
Commit changes and unmount the image Zacker: 75
dism /unmount-image /mountdir:c:\mount /commit
Determine exact name of Windows features that can be enabled and disabled Zacker: 75
dism /image:c:\mount /get-features
Scan an image, checking for corruption
dism /Online /Cleanup-Image /ScanHealth
Check an image to see whether any corruption has been detected
dism /Online /Cleanup-Image /CheckHealth
Repair an offline disk using a mounted image as a repair source
dism /Image:C:\offline /Cleanup-Image /RestoreHealth /Source:C:\test\mount\windows
Zacker: 71-75
dism /mount-image /imagefile:C:\images\install.wim /index:1 /mountdir:C:\mount
dism /add-package /image:C:\mount /packagepath:C:\updates
dism /add-driver /image:C:\mount /driver:C:\drivers\display.driver\nv_dispi.inf
dism /commit-image /image:C:\mount
dism /unmount-image /image:C:\mount

djoin

Perform an offline domain join for a Nano Server Zacker: 46

djoin /provision /domain practicelabs /machine PLABNANOSRV01 /savefile .\odjblob
Load the odjblob file created offline on the Nano Server.
djoin /requestodj /loadfile c:\odjblob /windowspath c:\windows /localos

dnscmd

Replicate an AD-integrated DNS zone to specific DCs ref

dnscmd . /CreateDirectoryPartition FQDN
Enable GlobalNames zone support
dnscmd <servername> /config /enableglobalnamessupport 1
Observe status of socket pool
dnscmd /info /socketpoolsize
Configure DNS socket pool size (0 through 10,000)
dnscmd /Config /SocketPoolSize <value>

dsquery

Find the Active Directory Schema version from the command-line ref

dsquery * cn=schema,cn=configuration,dc=domain,dc=com -scope base -attr objectVersion

JEA

Just Enough Administration (JEA) allows special remote sessions that limit which cmdlets and parameters can be used in a remote PowerShell session. These are implemented as restricted endpoints, to which only members of a specific security group can gain access. This offers a way to administer remote servers and move away from the traditional method using RDP.

net

Map a network location to a drive letter Practice Lab

net use x: \\192.168.0.35\c$
Stop/start a service
net stop dns
net start dns

netdom

Rename a computer

netdom renamecomputer %computername% /newname:newcomputername
Join a computer to a domain cf. Add-Computer, Zacker: 21
netdom join %computername% /domain:domainname /userd:username /password:*

netsh

Enable port forwarding ("portproxy") to a WSL2 distribution (src)

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=2222 connectaddress=172.23.129.80 connectport=2222
Configure DNS to be dynamically assigned
netsh interface ip set dns "Wi-Fi" dhcp
Delete Wi-Fi profiles
netsh wlan delete profile name=*
Turn off Windows firewall
netsh advfirewall set allprofiles state off
Enable firewall rule group
netsh advfirewall firewall set rule group="File and Printer Sharing" new enable=yes
Show Wi-Fi passwords (src)
netsh wlan show profile wifi key=clear
Check/reset WinHTTP proxy
netsh winhttp show proxy
netsh winhttp reset proxy

ntdsutil

Used to transfer FSMO roles between domain controllers. [Desmond: 30][Desmond2009]

regsvr32

Register a DLL dependency in order to enable the Active Directory Schema MMC snap-in on a DC [Desmond][Desmond2009]: 54

regsvr32 schmmgmt.dll

route

print add change delete

Basic usage

route add 192.168.2.0 mask 255.255.255.0 192.168.2.4

runas

env netonly profile|noprofile savecred showtrustlevels smartcard trustlevel user

Settings

appsfeatures personalization printers windowsupdate

about activation apps-volume appsforwebsites assignedaccess autoplay backup batterysaver bluetooth camera clipboard colors connecteddevices cortana crossdevice datausage dateandtime defaultapps delivery-optimization developers deviceencryption devices-touchpad display easeofaccess-display emailandaccounts findmydevice fonts keyboard lockscreen maps messaging mobile-devices mousetouchpad multitasking network network-wifi nfctransactions nightlight notifications optionalfeatures otherusers pen personalization-background personalization-colors personalization-start personalization-start-places phone powersleep privacy project proximity quiethours quietmomentsgame quietmomentspresentation quietmomentsscheduled recovery regionformatting regionlanguage remotedesktop savelocations screenrotation signinoptions signinoptions-launchfaceenrollment sound speech speech startupapps storagepolicies storagesense surfacehub-accounts surfacehub-calling surfacehub-devicemanagenent surfacehub-sessioncleanup surfacehub-welcome sync tabletmode taskbar themes troubleshoot typing usb videoplayback wheel windowsdefender windowsinsider workplace yourinfo

sfc

sfc /scannow

shutdown

Immediate restart

shutdown /r /t 0
Log off
shutdown /L

slmgr

ato dli dlv ipk rearm upk xpr

sysdm

2 3 4 5

tracert

The Windows equivalent of the Linux traceroute command. [Lammle][Lammle]: 112

wbadmin

enable backup get items get versions start backup start recovery start systemstaterecovery

-backupTarget -hyperv -vsscopy|-vssFull

Backup the entire drive, excluding some VMs

wbadmin enable backup -backupTarget:\\backups\hostdr\temp -include:c: -exclude:C:\VMs\VM1.vhdx,C:\VMs\VMAR.vhd -vsscopy -quiet

Zacker: 325-326

wbadmin get versions
wbadmin get items -version:11/14/2016-05:09
wbadmin start recovery -itemtype:app -items:cluster -version:01/01/2008-00:00
Zacker: 422
wbadmin start systemstaterecovery -version:11/27/2016-11:07
wbadmin get versions

wdsutil

initialize-server remInst

wdsutil /initialize-server /remInst:"D:\RemoteInstall"

winrm

# List all WinRM listeners  
winrm enumerate winrm/config/listener

# Display WinRM configuration
winrm get winrm/config

# Add an address to Trusted Hosts list
winrm set winrm/config/client @{TrustedHosts="192.168.10.41"}

winver

wmic

bios logicaldisk memorychip os path

Recover Windows product key [fossbytes.com][https://fossbytes.com/how-to-find-windows-product-key-lost-cmd-powershell-registry/]

wmic path softwarelicensingservice get OA3xOriginalProductKey
Display information about installed RAM
wmic memorychip list full
List all objects of type Win32_LogicalDisk using that class's alias logicaldisk. [Desmond][Desmond2009]: 642
wmic logicaldisk list brief
Recover serial number of a Lenovo laptop [pcsupport.lenovo.com][https://pcsupport.lenovo.com/us/en/solutions/find-product-name]
wmic bios get serialnumber
Display BIOS version
wmic bios get biosversion
Display operating system architecture
wmic os get osarchitecture
Display operating system type (48 is Windows 10)
wmic os get operatingsystemsku

[wsl][msdocs:wsl.exe]

l s t export import set-default-version

[wsusutil][msdocs:wsusutil.exe]

Specify a location for downloaded updates Zacker: 393

C:\Program Files\Update Services\Tools\wsusutil.exe postinstall content_dir=d:\wsus
Specify SQL server, when not using the default WID database
C:\Program Files\Update Services\Tools\wsusutil.exe postinstall sql_instance_name="db1\sqlinstance1" content_dir=d:\wsus

wt

d p
split-pane focus-tab

Open the default Windows Terminal profile and also an Ubuntu WSL tab [bleepingcomputer.com][https://www.bleepingcomputer.com/news/microsoft/windows-terminal-09-released-with-command-line-arguments-and-more/]

wt; new-tab -p "Ubuntu-18.04"
Open a split pane of the default profile in the D:\ folder and the cmd profile in the E:\ folder [bleepingcomputer.com][https://www.bleepingcomputer.com/news/microsoft/windows-terminal-09-released-with-command-line-arguments-and-more/]
wt -d d:\ ; split-pane -p "cmd" -d e:
Open the default profile and an Ubuntu WSL profile, but with the first tab focused [bleepingcomputer.com][https://www.bleepingcomputer.com/news/microsoft/windows-terminal-09-released-with-command-line-arguments-and-more/]
wt ; new-tab -p "Ubuntu-18.04"; focus-tab -t0

xcopy

Copy from one directory to another Practice Lab

xcopy /s c:\inetpub\wwwroot c:\nlbport