
SharePoint 2013 SP1 and Heartbleed


As many have noted, the SharePoint 2013 Service Pack 1 downloads were pulled. This is due to a potential issue with upgrading from Service Pack 1 to any future update. It does not impact day-to-day operations and should not be a cause for concern for those currently running a farm upgraded to Service Pack 1. Microsoft will release a fix to address this particular issue so that administrators can apply post-Service Pack 1 updates. The ISOs that include Service Pack 1 ("SharePoint 2013 with Service Pack 1") are not impacted by this particular issue; those ISOs can be found on the MSDN Download Center and the Volume License website.

Now, as for Heartbleed: because Microsoft does not use the OpenSSL crypto library, Microsoft products and properties (with the exception of Yammer) are not vulnerable to this particular issue. This, by nature, extends to SharePoint. However, if the same SSL certificate was used with an IIS site as well as a site using the vulnerable OpenSSL crypto library, the certificate will still need to be revoked and reissued, and of course any user associated with the site should change their password(s). Unfortunately, this particular bug is very bad for users. They cannot know whether a web server is vulnerable, nor if and when the vulnerability has been resolved. As always, it is best to regularly rotate passwords and use unique passwords on a site-by-site basis. I'd suggest using an application like LastPass, so you don't necessarily have to "remember" each password.

Patch Tuesday has passed, and as you may have noted, Patch Tuesday typically means Cumulative Updates for SharePoint. We're still waiting for the SharePoint 2013 CUs to be released. Hopefully the SP1 update issue will also be addressed with this release. On another note, when the April 2014 CU is released for SharePoint 2013, we'll have official support for SQL Server 2014 RTM! The blocking issue was due to Forefront Identity Manager (when is FIM not the blocking issue?).

The post SharePoint 2013 SP1 and Heartbleed appeared first on Nauplius.


SharePoint 2013 Service Pack 1 Re-Released


SharePoint 2013 SP1 has been re-released with updated binaries that support future updates. The links to the 'v2' SP1 can be found in the previous SP1 post, SharePoint 2013 Service Pack 1. This new SP1 can be installed over the top of a previous SP1 deployment. Don't forget to run the Config Wizard after installation.
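If you prefer the command line to the wizard UI, one commonly used form of the build-to-build upgrade command is the following sketch (run from an elevated SharePoint Management Shell; adjust the switches to your own patching process):

    # Run the configuration (upgrade) step from the command line after installing SP1.
    PSConfig.exe -cmd upgrade -inplace b2b -wait -force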

The post SharePoint 2013 Service Pack 1 Re-Released appeared first on Nauplius.

SharePointBAC 1.1.1 Release


SharePointBAC is a set of PowerShell cmdlets for managing and reporting on SharePoint Farm backups, created by Sean McDonough. This update provides a fix for reporting when the Start and/or Finish time is not available. SharePointBAC is a farm solution available for SharePoint 2010 and SharePoint 2013. Download this release from the SharePointBAC project page. Also check out my other solutions in my portfolio!

The post SharePointBAC 1.1.1 Release appeared first on Nauplius.

SharePoint Database Availability Group Cmdlets


The SharePoint 2013 April 2014 Cumulative Update includes three new SQL Availability Group cmdlets: Get-AvailabilityGroupStatus, Add-DatabaseToAvailabilityGroup, and Remove-DatabaseFromAvailabilityGroup. These cmdlets allow you to manage the SQL Availability Group from SharePoint.

First, a few notes about the Availability Group. The AG must be the connection in use by the Configuration database, as the checks for the Availability Group are executed against this connection. If the Availability Group happens to be on a set of SQL Servers other than the Configuration database's SQL Server connection, those checks will fail. The Availability Group must also have an Availability Group Listener; again, without it, the checks will fail. To confirm that the check will succeed, modify the AGName parameter in this T-SQL query and run it against the SQL instance used by the Configuration database: [crayon-53746dedb1af0409516253/]

Let's start with a brand new farm: two SQL Servers, SQLAO1 (Primary) and SQLAO2 (Secondary), in a SQL Availability Group (Synchronous mode) named SPAG (DNS name SPAG.nauplius.local). A dummy test database has been created in order to set up the AG, but it may be deleted. The SharePoint server is SharePoint 2013 SP1 with the April 2014 CU. No SQL Alias will be used for this setup since we have database mobility between SQL Servers thanks to AlwaysOn. Starting with an elevated SharePoint Management Shell… [crayon-53746dedb1b02707348632/]

Because SQLAO1 is the Primary Replica, the Administration and Configuration databases are present on it, but not on SQLAO2. The next step is where the AlwaysOn fun begins. Validate that a good backup location is available for each SQL Server (note that your SQL Server service account(s) will need access to this location). Run the following command from the SharePoint Management Shell: [crayon-53746dedb1b0d222138136/]

And that is it! Your Configuration and Administration databases are now part of the SQL Availability Group (in a real sense), with no flip-flopping of SQL Aliases, and the databases are automatically backed up and synchronized, as are the SQL Logins. This also eliminates the need for a Database Administrator to assist with getting SharePoint databases into an Availability Group, because it can now be done by the SharePoint Administrator. You can also now see the status of the Availability Group: [crayon-53746dedb1b16592217555/]

If you run Add-DatabaseToAvailabilityGroup and the cmdlet errors out, an Availability Group object may be created within SharePoint regardless. To fix this, simply run: [crayon-53746dedb1b20880390381/]

Databases that are part of the SharePoint SPAvailabilityGroup (rather than simply being in an AG created from SQL Server Management Studio) are also aware of this through their object. [crayon-53746dedb1b28860472426/]

On the SharePoint SPAvailabilityGroup object itself, we can also force a failover to one of the other nodes in the Availability Group: [crayon-53746dedb1b31740701911/] Yep, as a SharePoint Administrator, I no longer have to touch SSMS or SQL PowerShell to execute an AG failover!

If, down the road, you create a new database (Content or Service Application) and need to individually add it to a new or existing Availability Group, again use the Add-DatabaseToAvailabilityGroup cmdlet. Pay attention to which replica is the Primary, as the last backup path will be used if the FileShare switch is not specified. In my example above, the FileShare path is on SQLAO1, but I've failed over to SQLAO2; if I attempt to use the Add cmdlet, the backup will fail with Access Denied. Instead, I created a new share on SQLAO2 and ran: [crayon-53746dedb1b3a924351667/] Of course, you could also use a common CIFS location that the SQL Server service accounts have NTFS Modify access to.

The last cmdlet, Remove-DatabaseFromAvailabilityGroup, removes databases from the SharePoint SPAvailabilityGroup object. If you attempt to run the cmdlet against a database that currently exists within SharePoint (for example, one still attached to a Web Application), the cmdlet will fail with an error similar to: [crayon-53746dedb1b43513317005/] This is a little misleading (and there is a slightly better error in the ULS logs). If you use the -Force parameter, the cmdlet removes the database from the Availability Group, deleting the database from the Secondary nodes of the SQL Availability Group (by default), but the database will still be attached to SharePoint. If you need to remove a database from an Availability Group but wish to keep copies of the database on the Secondary nodes, use the -KeepSecondaryData parameter. The database on the Secondary nodes will enter a Not Synchronizing state, while the database with the active connection will no longer display a synchronizing state, as it is no longer part of the Availability Group.

One potential bug to note: it appears the Secondary replicas do not have the Max Degree of Parallelism (MAXDOP) set to 1 at any point by SharePoint. Make sure to set it manually prior to deploying SharePoint databases, as this can cause certain operations to fail, such as creating new Content Databases.
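As a rough sketch of how these cmdlets are typically invoked against the SPAG Availability Group used in this example (the backup share and database name here are placeholders, not the exact values from the walkthrough above):

    # Add a database to the SPAG Availability Group, backing it up to a share
    # the SQL Server service accounts can access (share path assumed).
    Add-DatabaseToAvailabilityGroup -AGName "SPAG" -DatabaseName "SharePoint_Config" -FileShare "\\SQLAO1\AGBackup"

    # View the Availability Group status as SharePoint sees it.
    Get-AvailabilityGroupStatus -Identity "SPAG"

    # Remove a database from the AG while keeping the copies on the Secondary replicas.
    Remove-DatabaseFromAvailabilityGroup -AGName "SPAG" -DatabaseName "SharePoint_Config" -KeepSecondaryData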

The post SharePoint Database Availability Group Cmdlets appeared first on Nauplius.

Announcing the Release of Nauplius.SharePoint.BlobCache 1.8


This is a major "behind the scenes" release, with some new additions for SharePoint 2010 and SharePoint 2013. In the short-lived 1.6.1 release, a new setting for SharePoint 2013 was added: debugMode. Included in this release as well, this setting adds a "hit-count" to the response headers, which can be inspected with a tool such as Fiddler. For example, with debugMode set to true, when requesting an asset that has been cached in the BLOB cache, you'll see the hit-count entry in the response headers in Fiddler.

For both SharePoint 2010 and 2013, there is now a counter displayed for how many times the BLOB cache has been flushed on the Web Application. In addition, both versions now have a "Restore Defaults" button. This button removes all settings made through the BLOB cache tool (or manually) and reverts them to the out-of-the-box settings. You can download the solution and find the documentation at the project site.

The post Announcing the Release of Nauplius.SharePoint.BlobCache 1.8 appeared first on Nauplius.

SharePoint 2013 April 2014 CU Claims Conversion Bug


There is a fatal bug in the April 2014 Cumulative Update, as well as MS14-022, for SharePoint 2013 when attempting to use the Convert-SPWebApplication cmdlet. With this cmdlet came a large code base change in order to add significant new functionality, but it unfortunately broke the Classic (Windows) to Claims migration functionality.

This cmdlet also comes with some new parameters, undocumented in either the cmdlet help (perhaps to be updated via Update-SPHelp in the future) or on TechNet. For reference, it now has a -From and a -To switch:

-From accepts the following string values: Legacy, Claims-Windows, Claims-Trusted-Default
-To accepts the following string values: Claims, Claims-Windows, Claims-Trusted-Default, Claims-SharePoint-Online

For the purposes of this post, the values in use are -From Legacy -To Claims -RetainPermissions.

When the migration process starts, the migrator loops through each Content Database, and for each online Content Database, each Site Collection, and finally each User/Group. In this example, we're taking NT AUTHORITY\Authenticated Users and converting it to c:0!.s|windows.

The major break appears to occur in Microsoft.SharePoint.Administration.SPWebApplicationMigrator in the method GetEntityMigrationDataFromOldCallBack(SPMigrationEntity entity). Near the top of the method, a value is set: [crayon-538c3c27da8f4974780610/] This is a fair thing to set: assume the migration will fail, and require each user to be explicitly marked as a success instead. Good idea, honestly. And for this user, we do actually succeed at making a valid claim, as I'll show you. In this same method, we pass through the portion where the conversion from Classic to Claims is actually performed, with no errors: [crayon-538c3c27da904798525530/]

Where the bug appears to be happening is after the conversion: the failure variable is never updated to Success. [crayon-538c3c27da90e946484441/] Here we can see that the str2 variable is not null and skipUser is not set. AlreadyMigrated is also not set, but the failure variable has not been updated to Success either, hence the migration is never allowed to move forward, even after leaving the last method where the failure variable should have been updated to Success.

In the end, what should happen is that we skip over a bunch of other validation methods that check whether SPMigrateEntityCallbackResult does not equal Success, and finally, if the entity does equal Success, add it to the Migration Cache to be migrated and execute the migration. In this example, I have the following users in my UserInfo table. The first user you see in Claims format is placed there automatically by the conversion process in order to start the conversion. [crayon-538c3c27da91b183425366/]

Next, running through each user and manually updating the result from Failure to Success after the internal Classic to Claims conversion (again, the conversion in SQL has not happened yet) within SPMigrateEntityCallbackResult MigrateEntity(SPClaimMigrationContext context, SPMigrationEntity entity), each user is added to a new "migration cache". I didn't dive too deeply into this cache, but needless to say, the migration is successful. Examining the UserInfo table again, post manual variable change: [crayon-538c3c27da925805549047/]

After the SQL query completes execution, I am able to log in as any user that has been migrated successfully. Otherwise, if the Web Application has been migrated to Claims but the users have not, the user will receive Access Denied when logging in with their Windows ("Classic") identity. Unfortunately, the old SharePoint 2010 MigrateUsers($true) method (deprecated) also runs through the same code, so it will produce the same results. In the end, we'll need to wait for a patch from Microsoft on this issue.
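For reference, the conversion attempted throughout this post was run along these lines (the Web Application URL is a placeholder; the switches are the ones discussed above):

    # Classic (Windows) to Claims conversion using the new -From/-To switches.
    # The Web Application URL is a placeholder.
    Convert-SPWebApplication -Identity "http://sharepoint.example.com" -From Legacy -To Claims -RetainPermissions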

The post SharePoint 2013 April 2014 CU Claims Conversion Bug appeared first on Nauplius.

MS14-022 Known Issues


There are a few known issues with the security update MS14-022 (Vulnerabilities in Microsoft SharePoint Server Could Allow Remote Code Execution, 2952166). Most of these issues have been observed in SharePoint 2013, although some of them may also be present in SharePoint 2010. MS14-022 appears to contain many SP1 and/or post-SP1 binaries.

For SharePoint Server 2013 pre-SP1 farms:

1) The Office 365 links in Central Administration are now present; their functionality status is unconfirmed.
2) An error appears in the SharePoint Management Shell when it is opened. This has previously been associated with installing Foundation SP1 on Server. [crayon-538c3c27d93f0970266582/]
3) "%25" is added to the URL when searching within the farm.
4) A potential issue with filtering with the SQL Server Reporting Services Web Part.
5) Another confirmed issue which I will post about as soon as I'm able to.

For SP1 and pre-SP1 farms:

1) The April 2014 CU Classic to Claims conversion bug with Convert-SPWebApplication is present in MS14-022.

While this security update is classified as Critical, this is a patch I would run through extended testing in a non-production farm if the production farm is not currently running Service Pack 1.

The post MS14-022 Known Issues appeared first on Nauplius.

SharePoint 2010 June 2014 Cumulative Updates


The June 2014 Cumulative Update for SharePoint 2010 has been released.

SharePoint Foundation: http://support.microsoft.com/kb/2880975
SharePoint Server 2010: http://support.microsoft.com/kb/2880972
Project Server 2010: http://support.microsoft.com/kb/2880974
Office 2010 June 2014 Cumulative Updates: http://support.microsoft.com/kb/2970263

The post SharePoint 2010 June 2014 Cumulative Updates appeared first on Nauplius.


SharePoint 2013 June 2014 Cumulative Updates


SharePoint Foundation: http://support.microsoft.com/kb/2881063
SharePoint Server 2013: http://support.microsoft.com/kb/2881061
Project Server 2013: http://support.microsoft.com/kb/2881062
Office Web Apps Server 2013: http://support.microsoft.com/kb/2881051
Office 2013 June 2014 Cumulative Updates: http://support.microsoft.com/kb/2970262

The post SharePoint 2013 June 2014 Cumulative Updates appeared first on Nauplius.

Workaround for April 2014 CU and MS14-022 Double Encoding Bug


The April 2014 Cumulative Update and MS14-022 introduced a "double encoding" issue that caused search result URLs containing %5C (the "\" character) to become double encoded (represented as "%255C"), producing broken links. This bug persists in the June 2014 Cumulative Update. As a workaround, you can edit a few files in the 15 hive. This is a temporary workaround for cases where the issue has a critical impact on your business.

You will want to edit two files, Search.ClientControls.js and Search.ClientControls.debug.js. Both files are located in C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\LAYOUTS\, and I'd strongly suggest backing both files up to an alternate location so the originals can be restored prior to a final fix being released by Microsoft.

Edit Search.ClientControls.debug.js first. This file is used when debugging in the browser, so it is easier to read and will help you understand what needs to be edited in the non-debug version. Find the following block of text: [crayon-53a0cf8f4b98d699456620/] encodeURI is a native function that is causing the double-encoding issue. This is a fairly simple fix: just remove the encodeURI function and save the file. The block of text will then look like this: [crayon-53a0cf8f4b9a0088213157/]

The next step is to edit the non-debug version of the file. This one you'll want to be careful with, as it isn't formatted as nicely. Search for "encodeURI" in Search.ClientControls.js: [crayon-53a0cf8f4b9ab032277249/] Next, remove encodeURI, carefully removing the opening and closing parentheses for both method calls: [crayon-53a0cf8f4b9b4798504328/]

Refresh your browser (you may need to clear your browser's cache), and the URLs, at least for People searches, will be correct. Other scenarios should be correct as well, but my lab is not set up to test them.
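As an aside, backing up the two files before editing them only takes a couple of PowerShell lines; a minimal sketch, with an assumed backup folder:

    # Back up the two search client control files before editing them (backup path assumed).
    $layouts = "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\TEMPLATE\LAYOUTS"
    $backup = "C:\Backups\LAYOUTS"
    New-Item -ItemType Directory -Path $backup -Force | Out-Null
    Copy-Item (Join-Path $layouts "Search.ClientControls.js") $backup
    Copy-Item (Join-Path $layouts "Search.ClientControls.debug.js") $backup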

The post Workaround for April 2014 CU and MS14-022 Double Encoding Bug appeared first on Nauplius.

Debugging SharePoint Slide deck


I presented Debugging SharePoint at the June Puget Sound SharePoint Users Group last Thursday. Below is the slide deck from the presentation. Yes, we had a few Microsoft employees present, and live debugging sessions ;-) In addition to the slides, I have an article on this topic which I will be updating soon to include new, expanded content with additional tips from this presentation.

The post Debugging SharePoint Slide deck appeared first on Nauplius.

Hyper-V Private Networks for SharePoint


Have you ever wanted a mobile SharePoint platform, but couldn't afford Azure or other "cloud" platforms, yet could afford a 32GB laptop with a secondary SSD drive for Hyper-V? Then this post is for you! As you know, SharePoint 2013 requires a Domain Controller. What does a Domain Controller require? A static IP address, of course! But what if you have a laptop, are mobile, and need to allow your SharePoint (or other servers) on Hyper-V to reach the Internet, such as to reach the public SharePoint App catalog? In order to have a static IP address, the solution is to use an Internal Virtual Switch, but in order to have Internet access, you need an External Virtual Switch. So how does one go about reconciling these needs? Linux!

So I'll admit it. I've been using Slackware since 1997, and RedHat from roughly around that time. I used Gentoo briefly, and now use CentOS exclusively for my Linux needs. One thing it does well here is make a great proxy/firewall device for us to use within Hyper-V to route traffic.

Here is what you'll need. Create two vSwitches in Hyper-V: an External vSwitch and an Internal vSwitch. Attach your Windows Servers to the Internal vSwitch and assign them to the [crayon-53aa109c8b5dc292265556-i/] network (or any other internal network of your choosing; it is best not to conflict with any other non-public network that may exist on the External vSwitch). It doesn't matter what IPs you assign your Windows Servers, but to keep things clear, let's reserve 192.168.0.1 for the gateway IP address. Download [crayon-53aa109c8b5ff314349960-i/] from one of the CentOS mirrors. Create a new Generation 1 VM in Hyper-V for CentOS, attach the first NIC to the External vSwitch, and the second NIC to the Internal vSwitch. I'd strongly recommend not only using this order, but also creating the NICs prior to building the VM; there is additional manual work that must be done if you add a NIC post-installation! Allocating at least 512MB RAM and 1 vCPU should be plenty for non-production use.

Attach the CentOS ISO to the VM, start it up, and run the installation. For the most part, choose the defaults. Enter a sane root password, and for partitioning, use the entire drive. The entire installation should only take a couple of minutes, and then the system will request a reboot. Note that the Microsoft Linux Integration Components are baked into the Linux kernel, so there is no need to install them.

The next step is to configure networking. The network scripts are located in [crayon-53aa109c8b642159827576-i/] and are named [crayon-53aa109c8b65b028363496-i/] (External vNIC) and [crayon-53aa109c8b671844195781-i/] (Internal vNIC). Because this is a minimal install of CentOS, our text editor of choice is going to be vi. This is not the easiest text editor to use, but once you get used to it, it makes sense (I promise). I recall once being on campus at Microsoft in Redmond as part of a high school job shadow, and employees argued over which editor was better, vi or emacs. Clearly those who argued for emacs were wrong and probably worked on such projects as Microsoft Bob 2.0.

Let's get started. If you ever make a mistake with [crayon-53aa109c8b689165341285-i/] and just want to back out of a change, use the following sequence of characters: [crayon-53aa109c8b69f304426621/] That will get you out of any mode and quit without saving any changes to the file. OK, first things first: let's get an IP address from the external DHCP server on the network. [crayon-53aa109c8b6b4421613170/]

Use the arrow keys to go to the [crayon-53aa109c8b6c9698400544-i/] line, go to the n in no, and hit the [crayon-53aa109c8b6df092210802-i/] key to erase [crayon-53aa109c8b6f3944307684-i/]. Next, hit [crayon-53aa109c8b708813593939-i/] for Insert, use the right arrow to put your cursor to the right of the = sign, and type in [crayon-53aa109c8b71d945545501-i/]. Hit the [crayon-53aa109c8b732045683126-i/] key (notice the [crayon-53aa109c8b747995127699-i/] at the bottom of the screen disappears). Next, type in [crayon-53aa109c8b75c107474968-i/], which means "write quit". This will save the file and exit [crayon-53aa109c8b771786841762-i/]. Now, type [crayon-53aa109c8b786764800640-i/] at the prompt. This will request an IP address from the DHCP server on the network. If you were successful, you can type in [crayon-53aa109c8b79b305020824-i/] and you should see two entries, one for [crayon-53aa109c8b7b0465224481-i/] (our External vNIC) and another for [crayon-53aa109c8b7c5353409376-i/] (loopback). Our External vNIC will have a DHCP assigned IP address, and we should be able to [crayon-53aa109c8b7da275186797-i/] (type [crayon-53aa109c8b7ee987549573-i/] to cancel). Great! Now, onto assigning the internal IP address on [crayon-53aa109c8b804462927830-i/]. [crayon-53aa109c8b818001242613/]

Using the same procedure as above, change [crayon-53aa109c8b82e223261280-i/]. Change [crayon-53aa109c8b842088997044-i/] from [crayon-53aa109c8b857761742222-i/] to [crayon-53aa109c8b86c987731924-i/] (this means we're going to put this interface in static IP mode). When at the end of the [crayon-53aa109c8b881223271340-i/] line, hit the [crayon-53aa109c8b896020443640-i/] key, and type in the following lines: [crayon-53aa109c8b8ab591856458/] Again, hit [crayon-53aa109c8b8c0932533661-i/], then [crayon-53aa109c8b8d4224678581-i/] to save and quit [crayon-53aa109c8b8e9134727017-i/]. At the prompt, type [crayon-53aa109c8b8fe325270426-i/], which will bring the [crayon-53aa109c8b912000558817-i/] interface online.

The next step will be to enable IP forwarding on all of our network interfaces. This is done in one of two ways; the first way is temporary until reboot, and the second way makes it permanent. Temporary: [crayon-53aa109c8b928303743753/] Permanent: [crayon-53aa109c8b93d716498009/] Insert a new line at any point, adding: [crayon-53aa109c8b951060537470/] Use [crayon-53aa109c8b966438677546-i/] to save and quit [crayon-53aa109c8b97a676739740-i/]. To commit the change, from the command line, use [crayon-53aa109c8b98f710492981-i/] to restart CentOS. Log back in as [crayon-53aa109c8b9a4736608079-i/].

The next step will be to configure iptables. This is what will allow us to forward traffic from the internal network to the outside world. Fortunately, we can do this through the iptables command line interface! Use the following commands: [crayon-53aa109c8b9ba390607995/] Then, save the ruleset and restart the iptables service: [crayon-53aa109c8b9cf514921597/]

The last step? [crayon-53aa109c8b9e4409636698/] This will update all of the packages to the latest version from the distribution's repository. You may need to restart for some of the packages to update completely. But that is it! Given that your Windows Servers on the Internal vSwitch are using the IP address assigned to the Internal vNIC of the Linux VM as their gateway, you should be able to ping public IP addresses. Setting up a Domain Controller should also allow you to resolve public domain names (given other network restrictions are not in place). […]
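As a side note on the Hyper-V portion of this setup, the two vSwitches described at the beginning of the post can also be created with PowerShell; a minimal sketch, where the switch names and the physical adapter name ("Ethernet") are my own assumptions:

    # External vSwitch bound to the physical NIC (adapter name assumed).
    New-VMSwitch -Name "External" -NetAdapterName "Ethernet" -AllowManagementOS $true

    # Internal vSwitch for the 192.168.0.x lab network.
    New-VMSwitch -Name "Internal" -SwitchType Internal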

The post Hyper-V Private Networks for SharePoint appeared first on Nauplius.

Configuring SharePoint with PowerShell


Configuring SharePoint with PowerShell, rather than running the Configuration Wizard post-installation, is quite easy. The primary advantage is that it gives you control over the Administration database. This method works with SharePoint 2010 and SharePoint 2013. After installing SharePoint, run the SharePoint Management Shell as Administrator. [crayon-53aa109c89ca7052694042/] TechNet also has this information; I just had to scroll too much. And now I have a source to directly copy and paste from :-)
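As a rough sketch of the sequence (the database names, SQL server, account, passphrase, and Central Administration port below are all placeholders):

    # Create the farm; names, server, account, and passphrase are placeholders.
    $passphrase = ConvertTo-SecureString "FarmPassphrase1!" -AsPlainText -Force
    $farmAccount = Get-Credential "CORP\sp_farm"

    New-SPConfigurationDatabase -DatabaseName "SharePoint_Config" -DatabaseServer "SQL01" -AdministrationContentDatabaseName "SharePoint_AdminContent" -Passphrase $passphrase -FarmCredentials $farmAccount

    # Provision Central Administration and the remaining farm components.
    Install-SPHelpCollection -All
    Initialize-SPResourceSecurity
    Install-SPService
    Install-SPFeature -AllExistingFeatures
    New-SPCentralAdministration -Port 2013 -WindowsAuthProvider NTLM
    Install-SPApplicationContent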

The post Configuring SharePoint with PowerShell appeared first on Nauplius.

SharePoint with Apache mod_proxy


The Apache Software Foundation provides a reverse proxy module named mod_proxy, along with mod_ssl (which extends functionality to SSL). This is a non-authenticating reverse proxy, similar in function to Microsoft's IIS Application Request Routing module. This article will cover getting a single Web Application on a single SharePoint server behind the reverse proxy over SSL on port TCP/443. We will be starting from the same VM as used in the previous blog post about using CentOS and iptables, so familiarize yourself with that before continuing, as it will be the base configuration moving forward. By now I will assume that you're familiar enough with vi to know how to save files. In addition, I will assume that the Domain Controller on the Hyper-V Internal vSwitch is at 192.168.0.2 and the SharePoint server is at 192.168.0.3. Our SharePoint server is going to have an SSL certificate for sharepoint.corp.nauplius.net, and of course that is what our Web Application will be. This particular SSL certificate has been issued from StartSSL (it's free).

The first step will be to modify the networking on the CentOS virtual machine. [crayon-53aa109c8419a986885131/] Add a new line: [crayon-53aa109c841c5502302359/] This will prevent the DHCP client ([crayon-53aa109c841dd338865141-i/]) from automatically adding the DHCP server's DNS information to [crayon-53aa109c841f2518332192-i/] (which is what allows the VM to automatically resolve domain names). Instead, we're going to manage that using the internal Domain Controller over [crayon-53aa109c84207703817120-i/]! [crayon-53aa109c8421c265714289/] Add a new line: [crayon-53aa109c84231688959501/] Next, let's edit the resolv.conf: [crayon-53aa109c84246884038909/] Change it so it reads: [crayon-53aa109c8425b673316952/] The next step is to bring both interfaces down and back up. [crayon-53aa109c8426f566236463/] Once both interfaces are back up, your Windows Servers should continue to have name resolution and connectivity to the Internet. In addition, [crayon-53aa109c84284192129044-i/] should show the static settings you entered ([crayon-53aa109c84299491107094-i/]), and you should be able to ping the internal servers from the CentOS VM by IP and FQDN.

Next, we need to install a few packages on CentOS: Apache (for [crayon-53aa109c842ae708278815-i/]), [crayon-53aa109c842c2450609273-i/] (for SSL support), and [crayon-53aa109c842d7482936443-i/] (for [crayon-53aa109c842eb215355419-i/], similar to [crayon-53aa109c84300126018806-i/] on Windows). [crayon-53aa109c84314690541722/] We now need to add a couple of new firewall rules: [crayon-53aa109c84329508786623/] And save the rules: [crayon-53aa109c8433e838669237/] This allows iptables to accept SSL traffic to the reverse proxy (locally).

The next section will not be quite as easy: we will be dealing with SSL certificates on the CentOS VM. In order to do this properly, we'll need to upload certificates to the CentOS VM, and unlike Windows, it isn't quite the point-and-click affair it is with a PFX. Most SSL certificate vendors will offer an unencrypted private key as well as a public key file. You'll want both of these. In addition, you'll also want the appropriate Certificate Authority SSL certificate bundle (this contains the public certificate chain for your SSL certificate). If your SSL vendor does not offer these file types, you'll need to use OpenSSL to convert the files to the appropriate formats.
For reference, here are my file names:

Unencrypted private key: sharepoint.key
Public key: sharepoint.cer
CA bundle: startssl-bundle.pem

In addition to this, specifically for mod_ssl, we will need a single file that contains both the public and unencrypted private keys. It should be in the following format: [crayon-53aa109c84357247904768/] You can use a program like Notepad++ to combine the sharepoint.key and sharepoint.cer files into a new file containing both the public and private keys. This will be the public-private key bundle, saved as sharepointpubpriv.crt.

Now all of these files need to be transferred to the CentOS VM. In order to do this, we'll use a protocol called SCP, which allows you to transfer files over SSH. Thanks to our default iptables rules, SSH is already open on our eth0 interface! Grab a copy of WinSCP and copy these files over to a directory (e.g. [crayon-53aa109c84370924298387-i/]). The next step is to copy the files over to the appropriate default locations. [crayon-53aa109c84385653809564/]

Now, using [crayon-53aa109c8439b823282481-i/], we need to configure [crayon-53aa109c843af447269364-i/] (the primary Apache configuration file). [crayon-53aa109c843c4651077690/] Make sure the following lines exist: [crayon-53aa109c843d9983627511/] In my example, I've removed the [crayon-53aa109c843ee308393675-i/] as well as the default [crayon-53aa109c84402187751005-i/]. This is a reverse proxy intended to only listen on 443 and serve requests to our internal SharePoint server. There are many other settings in here that you can, and likely should, modify, but this is not an in-depth lesson on Apache security.

The next step is to modify the [crayon-53aa109c84418135192605-i/] file. [crayon-53aa109c8442c081764481/] [crayon-53aa109c84441591478131-i/] should be present at the top of this configuration file. Instead of providing specific lines to set, here is the entire configuration. Again, we're using my example domain here, so adjust to fit your needs. [crayon-53aa109c84457876693197/] Make sure the [crayon-53aa109c84471575969612-i/] and [crayon-53aa109c84487169196743-i/] lines end with a trailing "/" on the path; otherwise relative paths will not be returned properly to the reverse proxy and you'll see unexpected results.

Once the changes are configured, run: [crayon-53aa109c8449c784302891/] Any errors will be logged in the log files at [crayon-53aa109c844b0585159275-i/] and [crayon-53aa109c844c5771274274-i/]. To watch [crayon-53aa109c84503081821951-i/] in real time, run: [crayon-53aa109c84518836538040/]

The last step is to edit the hosts file on any client computer to target, in this case, sharepoint.corp.nauplius.net at the IP of the [crayon-53aa109c8452d913623370-i/] interface. If everything goes well, we should be prompted for credentials and let right through to the SharePoint site!
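If you prefer to build the combined file on the Windows side before copying it over with WinSCP, a couple of PowerShell lines will do; this is a sketch only, assuming a certificate-then-key layout, so adjust the order if your configuration expects the reverse:

    # Concatenate the public certificate and unencrypted private key into one file
    # (certificate first, then key; a common convention, not the only valid one).
    Get-Content .\sharepoint.cer, .\sharepoint.key | Set-Content .\sharepointpubpriv.crt -Encoding Ascii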

The post SharePoint with Apache mod_proxy appeared first on Nauplius.

SharePoint Default Timeouts


Timeouts, especially unknown timeouts, can cause a lot of headaches, so I've started compiling the default timeouts in the "current" version of SharePoint (that is to say, I'll do my best to keep the list up-to-date). The SharePoint Default Timeouts Excel sheet can be found under the SharePoint Resources link; feel free to download and borrow it as needed :-) If there are any timeouts that should be added, let me know by adding a comment here, or by contacting me via my blog or Twitter.

The post SharePoint Default Timeouts appeared first on Nauplius.


SharePoint 2013 July 2014 Cumulative Updates


The July 2014 Cumulative Update for SharePoint 2013 has been released. Note that CUs will now be released on a monthly basis, rather than bi-monthly.

SharePoint Foundation: http://support.microsoft.com/kb/2882999
SharePoint Server 2013: http://support.microsoft.com/kb/2882989
Project Server 2013: http://support.microsoft.com/kb/2882990
Office 2013 July 2014 Cumulative Updates: http://support.microsoft.com/kb/2978164

The post SharePoint 2013 July 2014 Cumulative Updates appeared first on Nauplius.

SharePoint 2010 July 2014 Cumulative Updates


The July 2014 Cumulative Update for SharePoint 2010 has been released.

SharePoint Foundation: http://support.microsoft.com/kb/2883026
SharePoint Server 2010: http://support.microsoft.com/kb/2883005
Project Server 2010:
Office 2010 July 2014 Cumulative Updates: http://support.microsoft.com/kb/2978161

The post SharePoint 2010 July 2014 Cumulative Updates appeared first on Nauplius.

SharePoint ASP.NET Compilation Errors (CS0016)


I encountered a rather frustrating issue with SharePoint 2013. Like most farms, this farm has a specific Domain User running the IIS Application Pool handling Web Applications. These users had a Windows Profile created (this is done by elevating the user temporarily to Administrator, then running an application such as cmd.exe or notepad.exe as that user). "Out of the blue", when users would navigate to certain built-in pages (e.g. List Settings, or Version Settings underneath List Settings), they would generate errors similar to the below in the Application Event Log. [crayon-53e762c136d1f717405097/] In addition, when attempting to publish SharePoint Designer Workflows in SharePoint 2010 mode, SharePoint Designer would present a similar message, and end users received yet another form of the error.

As other suggestions on the Internet indicated, rebooting the server did work, for a little while. But eventually the errors would return (don't ask me why they wouldn't present themselves immediately, I don't know!). Eventually, Process Monitor showed that files could not be created in the Temp directory of the user running the IIS Application Pool! Lo and behold, when navigating to C:\Users\serviceaccount\AppData\Local\, the Temp directory was missing! Simply creating the Temp directory, C:\Users\serviceaccount\AppData\Local\Temp, resolved the issue immediately; no iisreset or restart was required.

The key here is that the user performing the compilation of ASP.NET pages needs to have a valid %Temp% path, wherever that location may be; specifically, the error message "The directory name is invalid" refers to this %Temp% location.
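A quick way to check for, and recreate, the missing folder from PowerShell; a minimal sketch using the placeholder account name from above:

    # Recreate the app pool account's Temp folder if it is missing ("serviceaccount" is a placeholder).
    $temp = "C:\Users\serviceaccount\AppData\Local\Temp"
    if (-not (Test-Path $temp)) {
        New-Item -ItemType Directory -Path $temp | Out-Null
    }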

The post SharePoint ASP.NET Compilation Errors (CS0016) appeared first on Nauplius.

Unable to provision the Claims to Windows Token Service


The Claims to Windows Token Service is stopped by default. Typically, starting this service is easy. In fact, I've never come across a situation where it didn't "just start", until now! In a thread on the SharePoint TechNet forums, a user was unable to provision the Claims to Windows Token Service due to the error "Illegal characters in path". With no further information, it is difficult to identify which path this error is referring to. However, we have .NET Reflector! So what we do is simply take a look at the full stack trace of the error.

Within the stack trace on the thread, you can see [crayon-53e762c1349f6715297941-i/], so clearly we're dealing with the system attempting to retrieve a file name. Further down in the stack trace, we come across the first mention of a SharePoint-related method, [crayon-53e762c134a21564726885-i/]. This is the method we'll reflect. When looking at the [crayon-53e762c134a3a698056539-i/] method, it is easy to identify which path we're looking for. [crayon-53e762c134a4f351331758/]

Clearly we need to look at the registry entry to make sure it is valid. In this case, HKLM\SYSTEM\CurrentControlSet\services\c2wts should have an ImagePath value of [crayon-53e762c134a64208218411-i/]. In the case of the poster on TechNet, the value was [crayon-53e762c134a78404393258-i/]. Note the quotes within the path. While I'm unsure how the value ended up with quotes, removing them allowed the service to start successfully.
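To inspect (and, if necessary, clean up) that value from PowerShell rather than regedit, something like this minimal sketch works; it assumes the standard c2wts service key:

    # Inspect the ImagePath value for the Claims to Windows Token Service.
    $key = "HKLM:\SYSTEM\CurrentControlSet\Services\c2wts"
    $imagePath = (Get-ItemProperty -Path $key -Name ImagePath).ImagePath
    $imagePath

    # Strip any stray quotes from the path and write it back.
    Set-ItemProperty -Path $key -Name ImagePath -Value $imagePath.Trim('"')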

The post Unable to provision the Claims to Windows Token Service appeared first on Nauplius.

The Expense of Application Pools


Microsoft's current recommendation for SharePoint is to leverage a single Web Application and as few Application Pools as possible. This is due to the overhead not only of each Web Application (additional timer jobs), but of IIS Application Pools as well. This article will cover the memory expense of Web Applications running on separate IIS Application Pools, a fairly common configuration practice.

This SharePoint 2013 SP1 farm runs on Windows Server 2012 R2. SQL Server 2014 is hosted on another virtual machine running Windows Server 2012 R2 Core. The farm has the default Service Instances running, and no Service Applications have been created. There are four Web Applications: SP04-A, SP04-B, SP04-C, and SP04-D. SP04-A and SP04-D share the same IIS Application Pool, SP04-A; the other Web Applications each have an IIS Application Pool named the same as the Web Application itself. In addition, each Web Application has a single root Site Collection using the Team Site template.

Using VMMap, we'll be paying close attention to the processes' Private Working Set. To give you some background, the Private Working Set is the region of a process's allocated memory that is not shared. This region does not include memory-mapped files (files loaded from disk into memory), but may include memory allocated to those memory-mapped files. One thing you'll notice is that a good portion of the Working Set is allocated to the Managed Heap. This is a contiguous region of reserved memory for the application, where .NET code is loaded.

What we want to identify is the impact of loading each IIS Application Pool (SP04-A, SP04-B, SP04-C), and the impact of loading additional Web Applications into a shared IIS Application Pool (SP04-D into SP04-A). In order to monitor memory usage, from a client machine we'll simply browse to http://hostname for each Web Application. This will spin up the Application Pool (not applicable for SP04-D).

Starting with SP04-A, after an iisreset, we can see the Private Working Set using roughly 467 MB, with most of that assigned to the Managed Heap (or .NET code). Adding SP04-B, its Private Working Set is roughly 379 MB. Now, SP04-C, the third Web Application with its own Application Pool: roughly 411 MB for its Private Working Set. Here, we're adding SP04-D. SP04-D is a separate Web Application, but uses the same Application Pool that SP04-A uses. Notice the Private Working Set has increased from 467 MB to about 526 MB, a difference of 59 MB. That is significantly better than an entire new Application Pool running at 350 MB+! Clearly, given the choice, sharing a single Application Pool across Web Applications is the more economical choice. In addition, when a single IIS Application Pool is used, each Web Application that starts after the first one has spun up the Application Pool responds to the end user in a shorter period of time.

So, what about Microsoft's recommendation of using Host-Named Site Collections? Some in the SharePoint community, and rightly so, believe that the HNSC recommendation is simply to make it 'easier' to migrate an on-premises installation to SharePoint Online. While clearly that is Microsoft's end goal, how about we take a look at it from a performance perspective? There is a new Host-Named Site Collection Web Application, SP04-MT, assigned an Application Pool of the same name. The root Site Collection is http://sp04-mt.nauplius.local, with two other Site Collections, http://sp04-mt2.nauplius.local and http://sp04-mt3.nauplius.local, all using the standard Team Site template.

Here is the memory usage after allowing http://sp04-mt.nauplius.local to fully load: a Private Working Set of 535,420 KB. Now, let's load the second Site Collection, http://sp04-mt2.nauplius.local. A Private Working Set of 537,604 KB! A difference of a little over 2 MB. In addition, the Site Collection is spun up instantly, thanks to our binaries having already been JIT'ed. Now, the last Site Collection, http://sp04-mt3.nauplius.local. The Private Working Set is now at 538,484 KB, a difference of 880 KB from loading the second Site Collection, and just under a 3 MB difference from the original load of the Application Pool.

From a performance and memory utilization perspective, leveraging Host-Named Site Collections should be the goal of SharePoint Administrators, where the solution can be applied. If Host-Named Site Collections for some reason cannot be used, instead use the same IIS Application Pool for all of the Web Applications in the farm, with the exception of Central Administration.
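For completeness, creating an additional host-named Site Collection such as http://sp04-mt2.nauplius.local is a single cmdlet; a sketch, where the owner alias and the Web Application URL are assumptions:

    # Create a host-named Site Collection in the SP04-MT Web Application
    # (owner alias and Web Application URL are assumed for illustration).
    New-SPSite "http://sp04-mt2.nauplius.local" -HostHeaderWebApplication "http://sp04-mt" -OwnerAlias "NAUPLIUS\sp_admin" -Template "STS#0" -Name "SP04-MT2"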

The post The Expense of Application Pools appeared first on Nauplius.
