
As an IT consultant, you sometimes run into situations where you need to compromise. Do I really need to buy another Windows license just to have a file server? I run Linux; do I have to use Windows in order to leverage Active Directory?
The answer to these questions is no. We don’t need Windows to run a file server, a software repository, or any other service we can think of, and still tie it to Active Directory.
By using Kerberos and LDAP, we can have a single sign-on environment with Active Directory through Linux servers.


Before I get into the details of how to do this, I just want to mention that this configuration comes with no guarantee and is given AS IS. You SHOULD BACK UP FIRST before modifying anything on your systems, and I can’t be held responsible if something happens to your system during the configuration.

My setup

This was done using CentOS 5.5 with a Windows Server 2008 R2 Active Directory schema.

Making sure AD is ready

Before messing with Linux, there is one prerequisite that needs to be installed on the AD side, and it requires a reboot: Identity Management for UNIX.
This extends the Active Directory schema with attributes such as the UID, GID, login shell, and home folder. In Windows Server 2008 R2, open Server Manager, right-click the role “Active Directory Domain Services”, and select “Add Role Services”.

Select Identity Management for UNIX and all its sub components.

Once the install is done, simply reboot the server.

Once installation is finished, you should see a UNIX Attributes tab when going into a user’s or group’s properties:

There is one more thing we need to configure in order to test Single Sign On (SSO) with Linux: a user with UNIX properties in Active Directory. The NIS domain will already be set to the NETBIOS name of your domain. In my example, I gave administrator a UID of 10000, set the login shell to /bin/bash, and specified a basic Linux home folder along with a group.

(Note: it is good to use User IDs of 10,000 and up; this avoids conflicts with existing local user accounts in Linux.)

Also, before we go to Linux, it’s a good idea to create a user that Linux can use to browse AD.


So with AD ready to go, let’s log in to our Linux machine. Make sure that DNS is configured on the Linux server; joining an AD domain without DNS doesn’t really work well. Modify /etc/resolv.conf to contain this:

nameserver x.x.x.x

nameserver x.x.x.x

There are four configuration files that need to be modified: krb5.conf, nsswitch.conf, ldap.conf, and system-auth. The best thing to do right now is to set up a second connection to the Linux box (either by SSH, or by pressing CTRL + ALT + F2).

The reason for the second connection is that we are tinkering with how Linux handles authentication, and a misconfiguration in certain files (namely system-auth) can prevent even root from logging on.

Also, let’s back up the configurations before we modify them. I like to create a nice /backupconfig folder at the root and copy all the configs there:

mkdir /backupconfig

cp /etc/krb5.conf /backupconfig

cp /etc/nsswitch.conf /backupconfig

cp /etc/ldap.conf /backupconfig

cp /etc/pam.d/system-auth /backupconfig
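If you prefer, the four copies above can be done in one shot. Here is a small sketch that also adds a date suffix, so repeated runs don’t overwrite your first backup:

```shell
# copy each config into a backup folder, adding a date suffix so
# repeated runs don't clobber the original backup
backup_cfgs() {              # usage: backup_cfgs DESTDIR FILE...
  dest=$1; shift
  mkdir -p "$dest"
  for f in "$@"; do
    cp -p "$f" "$dest/$(basename "$f").$(date +%Y%m%d)"
  done
}
# usage:
# backup_cfgs /backupconfig /etc/krb5.conf /etc/nsswitch.conf /etc/ldap.conf /etc/pam.d/system-auth
```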

Got that? Great!

First, let’s modify krb5.conf. In the following configurations, I’m using the domain test.local, which has a domain controller called dc.test.local. So now, with either vi or my personal favorite, “nano”:

nano /etc/krb5.conf

[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
default_realm = TEST.LOCAL
dns_lookup_realm = true
dns_lookup_kdc = true

[realms]
TEST.LOCAL = {
kdc = dc.test.local:88
admin_server = dc.test.local:749
default_domain = test.local
}

[domain_realm]
.test.local = TEST.LOCAL
test.local = TEST.LOCAL

[kdc]
profile = /var/kerberos/krb5kdc/kdc.conf

[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}

It is very, VERY important that we respect capitalization here; otherwise things just don’t work.

So as you can tell, this is the configuration for Kerberos. We are telling Linux that TEST.LOCAL is our default realm, and that in order to talk to TEST.LOCAL, it should use dc.test.local. We also said that DNS can be used to look up the realm and the KDC.

The domain_realm section says: if anyone uses test.local (or anything under it) for authentication, use the realm TEST.LOCAL instead.

In appdefaults, we tell PAM (the authentication module in Linux) not to debug, not to convert tickets to Kerberos version 4, that tickets can be forwarded, and that a Kerberos ticket is good for 36000 seconds (10 hours).

Once that is done, let’s work our way to /etc/ldap.conf. What I would do is erase everything in ldap.conf and write a nice, fresh, clean config like so:

nano /etc/ldap.conf

base dc=test,dc=local
uri ldap://dc.test.local/
binddn unixjoin@test.local
bindpw password
scope sub
ssl no
nss_base_passwd dc=test,dc=local?sub
nss_base_shadow dc=test,dc=local?sub
nss_base_group dc=test,dc=local?sub?&(objectCategory=group)(gidnumber=*)
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup group
nss_map_attribute gecos cn
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
tls_cacertdir /etc/openldap/cacerts
pam_password md5

What ldap.conf does is point the Linux name-service databases (“passwd”, “shadow”, and “group”) at a search base within Active Directory, using LDAP queries. It then maps certain attributes from Linux to Active Directory; for example, the attribute “homeDirectory” on the Linux side is mapped to “unixHomeDirectory” in Active Directory.
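To picture what those mappings do, here is a toy sketch (the sample attribute values are made up) that turns the AD attributes nss_ldap reads into a passwd-style line, following the nss_map_attribute rules above:

```shell
# toy illustration of the attribute mapping: build a passwd-style line
# from LDIF-style "attribute: value" input (sample data, not a real query)
ldif_to_passwd() {
  awk -F': ' '
    $1 == "sAMAccountName"    { name  = $2 }
    $1 == "uidNumber"         { uid   = $2 }
    $1 == "gidNumber"         { gid   = $2 }
    $1 == "cn"                { gecos = $2 }   # gecos mapped from cn
    $1 == "unixHomeDirectory" { home  = $2 }   # homeDirectory mapped from unixHomeDirectory
    $1 == "loginShell"        { shell = $2 }
    END { printf "%s:x:%s:%s:%s:%s:%s\n", name, uid, gid, gecos, home, shell }
  '
}
```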

OK, halfway done. Now we need to tell Linux in which order the name-service sources are consulted, much like the host.conf file says in which order hostname resolution will work.

nano /etc/nsswitch.conf
passwd: files ldap
shadow: files ldap
group: files ldap

#hosts: db files nisplus nis dns
hosts: files dns

# Example – obey only what nisplus tells us…
#services: nisplus [NOTFOUND=return] files
#networks: nisplus [NOTFOUND=return] files
#protocols: nisplus [NOTFOUND=return] files
#rpc: nisplus [NOTFOUND=return] files
#ethers: nisplus [NOTFOUND=return] files
#netmasks: nisplus [NOTFOUND=return] files

bootparams: nisplus [NOTFOUND=return] files

ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
services: files

netgroup: files ldap

publickey: nisplus

automount: files ldap

aliases:    files nisplus

As you can see, this tells Linux: try local files first for authentication, then try LDAP.
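The ordering logic is simple enough to sketch in a few lines of shell. This is only a toy model of what "passwd: files ldap" means, not what glibc actually does: consult each source left to right and stop at the first hit.

```shell
# toy model of nsswitch ordering: check the "files" database first,
# fall back to the "ldap" database only on a miss
nss_lookup() {               # usage: nss_lookup NAME FILES_DB LDAP_DB
  grep -m1 "^$1:" "$2" 2>/dev/null && return 0
  grep -m1 "^$1:" "$3" 2>/dev/null
}
```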

The final portion of AD authentication in Linux is to modify system-auth. This file is part of PAM. We touched on it a bit earlier: it’s the authentication service of Linux, so if there is a misconfiguration in this part, well, don’t say I didn’t warn you…

nano /etc/pam.d/system-auth

# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so use_first_pass
auth        sufficient    pam_ldap.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so broken_shadow
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    sufficient    pam_krb5.so use_authtok
password    sufficient    pam_ldap.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_krb5.so
session     optional      pam_ldap.so
session     required      pam_mkhomedir.so

These lines tell PAM which module to use at each stage, and whether each one is optional, sufficient, or required to log in and create a session on this box.

Now the moment of truth: it’s time to test Single Sign On for Linux. The first command is kinit:

kinit administrator

Replace “administrator” with the user you added UNIX attributes to. If successful, you should be asked for a password and then sent back to your prompt without any other messages. Run klist to list the Kerberos ticket, and kdestroy to destroy it. One last test is “getent passwd”. This command will query your local passwd file and then query AD for UNIX-enabled users; for now, we should only see the user that you modified earlier.
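Since we put all AD accounts at UID 10000 and up, a quick way to separate them from the local ones is to filter getent’s output on the UID field. A small sketch; pipe `getent passwd` into it:

```shell
# print only accounts whose UID is in our AD range (>= 10000)
ad_accounts() {
  awk -F: '$3 >= 10000 { print $1 " (uid " $3 ")" }'
}
# usage: getent passwd | ad_accounts
```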

The fastest way to test authentication is by establishing an SSH session from the same system:

ssh administrator@localhost

Obviously, use the same user as configured in Active Directory at the beginning. You should be able to log on without any hiccups, and it will even create a home folder (if there wasn’t one already).

Congrats! You got it! But wait: why isn’t the Linux box listed in AD as a computer account? We can fix this by continuing a bit more and adding the computer account from within Linux.

To do this, make sure Samba is installed:

yum install samba

Now, we need to modify some parameters in /etc/samba/smb.conf:

nano /etc/samba/smb.conf

workgroup = TEST

security = ads

passdb backend = tdbsam

realm = test.local

password server = dc.test.local

Now the last part is to run “net rpc join -U admin”.

Replace admin with an administrator account in your active directory environment.

Voila! Done! Now you have Single Sign On with AD on your linux box.

This is a new weekly post with a tip for Windows, OS X, Linux, iOS, Android… anything, really, that I hope could help others in their daily computing lives.

This week’s tip is a Windows tip. Ever wonder what permissions you have in your organization? What groups you are part of? What your SID is?

There is a nice command, introduced back in the Windows XP days, called whoami. First shipped as part of the Support Tools, and now part of the standard install of Windows, this command can give you all the information about the currently logged-on user.

If we just issue whoami in CMD, we will get this:

Nothing spectacular, but let’s look at the flags for the command:
We can see there is a /ALL flag; let’s see what happens when we run whoami /all
(Important SIDs are whited out.)
We can see a whole bunch of information: my username, my SID, my domain group memberships, and even my privileges.
So if you ever want a user to send you their information, you can make a batch script that runs:
whoami /all > userinfo.txt
This will save the information to a text file that the user can send your way, so you can see all their group memberships and make changes as necessary.

Enabling TRIM Support on OSX

Getting an SSD is probably the single most amazing thing that ever happened to my personal computing experience. It took my mid-2010 13″ MacBook Pro from an OK experience to a first-class experience.

The system boots fast, applications load faster, and I’m much more productive.

One thing that’s important in an SSD’s life is TRIM support. TRIM, in short, is the garbage collection needed to clear deleted data on your SSD.

To go into more detail: when the operating system deletes data off the drive, the SSD doesn’t actually clear the bits for that data; it just removes it from the allocation table. The TRIM command is sent by the operating system to eventually clear those bits and make them ready to be written to again.

If TRIM weren’t available from the operating system, the SSD would eventually become slow, because it would need to find stale bits and clear them before writing.

Wikipedia article on TRIM

Fortunately, most modern operating systems do support TRIM (Windows 7 and OS X 10.7). The problem with OS X is that TRIM isn’t enabled by default unless the drive is an Apple-branded SSD.

I got an OCZ Vertex 2 120 GB SSD, and when I checked for TRIM support after a reinstall, TRIM was not supported. (You can see this by going into About This Mac > More Info > System Report > Serial-ATA.) You can enable it, but it requires a reboot and some Terminal work.

There is a good article from this website.

They point to a document, but there was an issue with the way the quotes were rendered, so I’ll post the commands here; for the sake of completeness, you can find the document here.

First, back up the file in question:

sudo cp /System/Library/Extensions/IOAHCIFamily.kext/Contents/PlugIns/IOAHCIBlockStorage.kext/Contents/MacOS/IOAHCIBlockStorage /IOAHCIBlockStorage.original

Now it’s time to use perl to modify the file:

sudo perl -pi -e 's|(\x52\x6F\x74\x61\x74\x69\x6F\x6E\x61\x6C\x00).{9}(\x00\x51)|$1\x00\x00\x00\x00\x00\x00\x00\x00\x00$2|sg' /System/Library/Extensions/IOAHCIFamily.kext/Contents/PlugIns/IOAHCIBlockStorage.kext/Contents/MacOS/IOAHCIBlockStorage

Clear the kext caches:

sudo kextcache -system-prelinked-kernel

sudo kextcache -system-caches

Now reboot your Mac.

Now you should see TRIM support as Yes.

There are bugs that you can work around, and then there are some that are just weird. I came across one that involved a strange state for a local Windows profile. Usually this kind of issue prompts you to simply re-create the profile, and that usually works, but there is a faster way to resolve issues where the profile is stuck in a backup state:

The image above doesn’t really show a backup status, but you get the picture.
The profile in question will load the home directory C:\Users\TEMP. The user’s desktop won’t be the same, Outlook won’t have the same profile, and the user’s favorites will be gone. Let’s not panic: the original profile folder is still there. The user is just not mapped to the right home directory.

First thing to do is to reboot the workstation in question. If you still have the same issue, we will need to modify the registry.

NOTE: Modifying the registry is risky, and even if you follow the instructions word for word, I can’t guarantee success, nor can I rule out a corrupted Windows install or loss of data. Please proceed at your own risk.

This remedy is taken from this Microsoft KB article, but I’ll go through it here for completeness and add my thoughts to each step.

Go to Start and run REGEDIT

Go to:

  • HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
In this key, you should see something similar to this:

The S-1-5-21 keys are the actual profile configurations in Windows. One thing to notice is that two of them are strikingly similar (S-1-5-21-1079119…) but with one difference: the .bak at the end of the key at the bottom.

Let’s take a look inside the key:

A healthy profile should look like this:

A cool thing to note is that ProfileImagePath points to the home directory of the user. An unhealthy profile will show a ProfileImagePath of C:\Users\TEMP, and RefCount will have a value higher than 0.

To solve the problem, log in to an administrator account other than the one that has the issue.

Next, rename the key that doesn’t have the .bak so that it ends in .ba.

Now rename the key that HAS the .bak and remove the .bak from its name.

Finally, rename the key that now ends in .ba so that it ends in .bak.

Once that is done, you will need to modify a couple of values in the key without the .bak:

Change RefCount to 0.

Change State to 0.
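If you prefer, those two value changes can also be captured in a .reg file and merged (the SID below is a placeholder; substitute the key you just renamed):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-xxxxxxxxxx-xxxxxxxxxx-xxxxxxxxxx-1001]
"RefCount"=dword:00000000
"State"=dword:00000000
```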

Now it’s time to reboot and try to log in.

This worked for me. What you’re more or less doing is manually changing the profile’s state from a backup state back to a local state. It’s similar to what you do when a Windows server can’t clear a pending-restart status.

Another weird bug came across my desk this week, this time dealing with VMware Fusion and mapping USB devices to a VM.

The problem is that when you connect a USB device to the VM in question via the menu, VMware Fusion never actually attaches it to the VM, leaving you scratching your head: is Windows the problem? Is it the port? What is it?

Turns out that this problem can be caused by permissions! Go into Terminal on the Mac and type:

ls -ld /

This should present the permissions:

drwxr-xr-x  33 root  wheel  1190 14 Dec 09:17 /

If you see anything else, we are going to need to fix the permissions.

First place to go is Disk Utility: select your disk, then click “Verify Permissions” and “Repair Permissions”.

If this doesn’t solve the problem, you can always do it the Unix way and type the following commands in the Terminal window:

sudo -s

chown root:wheel /

chmod 755 /
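For reference, 755 is just the octal spelling of rwxr-xr-x: owner gets 7 (rwx), group and other each get 5 (r-x). A tiny helper to print a path’s octal mode; it assumes GNU stat, with a fallback to the BSD/macOS syntax:

```shell
# print a path's permission bits in octal
# tries GNU coreutils syntax first, then the BSD/macOS form
mode_of() {
  stat -c '%a' "$1" 2>/dev/null || stat -f '%Lp' "$1"
}
# usage: mode_of /    (a healthy root should report 755)
```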

This will require a reboot of the Mac, and then you should have the right permissions to map USB devices!

Original KB Article:

IMCEAMAILTO errors in Exchange

Ever get this weird error?

#550 5.4.4 ROUTING.NoConnectorForAddressType; unable to route for address type ##

I encountered this weird error today. What is that strange e-mail address, IMCEAMAILTO? This is what Outlook generates when a user clicks a “mailto:” hyperlink within an email to send a new message.

For some reason, Outlook doesn’t actually parse the MAILTO: prefix, but instead adds the MAILTO: as part of the email address that you are sending to.

The problem doesn’t stop there, unfortunately. Because Outlook wants to remember every address you ever wrote to (the Suggested Contacts list), it will actually save the malformed address for that specific contact, which will present itself as a contact called “” with the email address “”.

To solve this problem permanently for that specific address, we need to dig a bit deeper and change the email address type back to SMTP:

With a lot of modern firewalls (ISA, WatchGuard, etc.), we can apply proxy actions to published services. What’s the advantage? We can monitor the entire conversation between the client and our web server. Just like with client outbound proxies, however, there can be some mishaps.

One good example is how a WebDAV server behaves behind an HTTP proxy; you may get mixed results. OWA (Outlook Web Access) is a WebDAV application for IE clients, and sometimes you may get errors like not being able to see your inbox while your folders show up just fine, or ActiveSync not working at all.

First, let’s look at the OWA error. In my example, I’m using a WatchGuard XTM firewall with an HTTPS proxy to publish OWA. With the proxy’s default values, we can log into OWA, but the inbox keeps showing a “loading…” message. In order to make the inbox come up, we need to check a simple checkbox:


This bypasses proxy actions to allow WebDAV.

Next, let’s look at ActiveSync. ActiveSync will just not work with WatchGuard’s default HTTPS proxy. The best way to diagnose it is to try to browse to the ActiveSync web page:


With this, we need to allow the “OPTIONS” method in the HTTP protocol:


I recently got myself a Mac Mini (mid-2011) to act as a media center and as a server for my home environment. I will admit, things were not as smooth as I anticipated…

Apart from not having control of DHCP and DNS out of the box (not that I’m bitter), and having to download the remote server admin tools to control Open Directory, the Time Machine server function never “just worked” for me.

On the server, setup is plainly simple. Choose your disk and turn it on:


So the setup is practically seamless. How does another Mac back up to the Time Machine server? The server uses Bonjour to broadcast the backup service. What’s presented to your Mac is a share on the server called “Backups”:


What SHOULD happen is backups over Wi-Fi. Pretty cool! One problem: troubleshooting this thing is not user-friendly AT ALL, as in my case:


What does “NAConnectToServerSync failed with error: 80” mean?

Of course, Lion was brand spanking new, so googling for help was useless (especially for Lion Server). It turns out the password I was using was the culprit.

My password contained a special character, “$”. This messes with the mount_afp command that is issued to do the backup. The solution? Create a backup user without special characters in its password.

Now, with this considered, I find this HORRIBLE! How, in this day and age, can you not allow special characters in passwords and still have things work? It’s beyond me. A lot of my experience with this server has been a big mess. In Windows, when I DCPROMO a server, it installs DNS; why are DNS and DHCP so buried in the settings here? I don’t get it…

Hopefully Apple can get on this and apply the same quality control it does to its consumer products. Hell, at $50 for a server license for all your Macs, you can pretty much call it a consumer product.

Remember the good old days, when you wanted to export an e-mail account out of Exchange for archiving or just general backup purposes? We admins needed to install ExMerge!
ExMerge was, and still is, a blessing to admins everywhere: a powerful tool that gave you more control over exporting or importing mailboxes in Exchange, packaging everything up in a nice .PST file so you could re-import it or open it with Outlook. Let’s face facts, though: by today’s standards it’s not the most elegant or modern solution going. I was happy to see that Microsoft added this functionality to Exchange 2010 through PowerShell, and no Outlook required!

First off, we need to give your AD account the Mailbox Import Export role. Let’s fire up the Exchange Management Shell and type:

New-ManagementRoleAssignment -Role "Mailbox Import Export" -User domain\AdministratorAccount

Before we start exporting and importing, there is one small snag: we need to use network shares for the input and output of PST files. Of course, the share can be on the Exchange server itself. (Make sure you have full read and write permissions on the share!)

So let’s start with exporting.

When you’re importing or exporting, you issue a request; think of it like moving a mailbox in the Exchange Management Console. The request holds the status of the job, even after the job fails or completes.

To start an export request:

New-MailboxExportRequest -Mailbox user -FilePath "\\server\share\user.pst"

This will issue an export request… now what? We can list the export requests by issuing:

Get-MailboxExportRequest

For a more detailed output, pipe it through fl:

Get-MailboxExportRequest | fl

This is good, but now I want the full details of the request I just made:

get-mailboxexportrequeststatistics -identity user\mailboxexport | fl

If we want to create a mailbox import request, it’s the same set of commands; just change “export” to “import”:

New-MailboxImportRequest -Mailbox user -FilePath "\\server\share\user.pst"

Get-MailboxImportRequest

get-mailboximportrequeststatistics -identity user\mailboximport | fl

Sometimes you need to publish a bunch of web servers but don’t have enough public IP addresses to publish them with.

Usually virtual hosts come to the rescue, but what if you have multiple instances of Apache, or just multiple web servers?

There is a way to redirect these requests using only one public IP, and best of all, it’s completely free! (In money, not in time!)


What you will need:
A distro of Linux (I like CentOS)
An available machine, or the ability to create a virtual machine

After installing your base OS, you’re going to need to do some “wget” to fetch the source files to install.

First create a folder:

mkdir /installer
cd /installer

Now it’s time to get the latest source package of HAProxy:


Extract the tarball and build it (on CentOS 5 with a 2.6 kernel, the build target is typically make TARGET=linux26), then issue a:

make install

Let’s copy haproxy to the sbin folder:

cp haproxy /usr/sbin/haproxy

Now let’s go to the etc folder:

cd /etc

and make a new file called “haproxy.cfg” with this in it:

nano haproxy.cfg

global
maxconn 4096 # Total max connections. This is dependent on ulimit
nbproc 4 # Number of processing cores. A dual dual-core Opteron is 4 cores, for example.

defaults
mode http
clitimeout 60000
srvtimeout 30000
contimeout 4000
option httpclose # Disable keepalive

frontend http-in
bind *:80
acl is_server1 hdr_end(host) -i server1.test.local
acl is_server2 hdr_end(host) -i server2.test.local

use_backend server1 if is_server1
use_backend server2 if is_server2

backend server1
balance roundrobin
cookie SERVERID insert nocache indirect
option httpchk HEAD /check.txt HTTP/1.0
option httpclose
option forwardfor
server Local 192.168.1.x:80 cookie Local
backend server2
balance roundrobin
cookie SERVERID insert nocache indirect
option httpchk HEAD /check.txt HTTP/1.0
option httpclose
option forwardfor
server Local 192.168.1.x:8080 cookie Local

More about this config in a little bit.

To finish the install, let’s get the launcher script:

wget -O /etc/init.d/haproxy

Now finish the startup setup:

chmod +x /etc/init.d/haproxy
chkconfig --add haproxy
chkconfig haproxy on

Now you can start and stop the service by running:

service haproxy stop
service haproxy start

So what about the config file? Let’s focus on a few sections of importance:

The first section is the ACL section:

frontend http-in
bind *:80
acl is_server1 hdr_end(host) -i server1.test.local
acl is_server2 hdr_end(host) -i server2.test.local

use_backend server1 if is_server1
use_backend server2 if is_server2

This is saying: “I’m creating this rule called ‘is_server1’, and in this rule, I want you to check the Host header (hdr_end(host)) and see if it ends with the hostname you give it (e.g. server1.test.local).” The same mentality applies to is_server2.

The second part states: “redirect to the backend ‘server1’ if the rule ‘is_server1’ is true”.

So far, so good. Now let’s take a look at the “backend” section for “server1”:

backend server1
balance roundrobin
cookie SERVERID insert nocache indirect
option httpchk HEAD /check.txt HTTP/1.0
option httpclose
option forwardfor
server Local 192.168.1.x:80 cookie Local

In brief, what this states is: “this is the configuration for server1; requests sent to this backend are forwarded to the server at 192.168.1.x:80”.

So to add or remove servers in your configuration, all you need to do is add the new configuration to these two sections, and you’re all set.
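For example, adding a hypothetical third site (the hostname and port here are placeholders) would just mean one new acl/use_backend pair in the frontend and one new backend block:

```
acl is_server3 hdr_end(host) -i server3.test.local
use_backend server3 if is_server3

backend server3
balance roundrobin
cookie SERVERID insert nocache indirect
option httpchk HEAD /check.txt HTTP/1.0
option httpclose
option forwardfor
server Local 192.168.1.x:8081 cookie Local
```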