One of the common questions folks face in an LDAP implementation is how to implement some sort of locking mechanism for old accounts. We use the employeeType attribute from the inetOrgPerson schema, though any similar attribute (or even a custom one) would probably work. One way to implement this locking is to add checks in your application code to filter out such accounts. That strategy is bound to leak (due to application problems or coding errors), and the data often has multiple access points anyway - a public address book, or folks querying the directory directly. A better way (imo) is to filter these accounts within the LDAP server itself. OpenLDAP allows you to set ACLs on specific filters, which works beautifully for cases like this. As an example, we have this ACL in our slapd.conf file:
access to dn.children="ou=People,dc=mozilla" filter=(!(|(employeeType=Contractor)(employeeType=Employee)))
by group="cn=admins,ou=ldapgroups,dc=mozilla" write
by * none
This hides accounts with any other employeeType value (anything other than Employee or Contractor) from everyone except the administrators. Of course, this depends on you setting the employeeType attribute to some appropriate value (like Retired) on inactive accounts.
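Deactivating an account is then just an attribute change. A minimal sketch of the LDIF you would feed to ldapmodify - the uid and the Retired value here are made up for illustration; use whatever convention fits your directory:

```ldif
# Hypothetical entry - adjust the DN and the value to your own directory.
dn: uid=jdoe,ou=People,dc=mozilla
changetype: modify
replace: employeeType
employeeType: Retired
```

Once the attribute no longer matches Employee or Contractor, the ACL above makes the entry invisible to everyone but the admins, with no application-side code involved.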
Thursday, April 23, 2009
Tuesday, April 21, 2009
iSCSI - fast, cheap and reliable?
iSCSI is often touted as the new way forward for enterprise storage needs. This is mostly a rant, along with some notes on what I have found useful so far.
To begin with, iSCSI support in Linux is pretty good, and you will find a lot of resources on how to set it up. Here are the quick steps.
- yum install iscsi-initiator-utils (or whatever tool you use to install stuff)
- chkconfig iscsid on; chkconfig iscsi on (or update-rc.d on Debian-style systems)
- echo "InitiatorName=your initiator name" > /etc/iscsi/initiatorname.iscsi (but put the real name in - make one up that's unique to this host). Make sure that the iSCSI volume you want to mount allows this initiator name to connect to it.
- service iscsid start
- iscsiadm -m discovery -t sendtargets -p IPofIScsiHost
- service iscsi start (now you should see a device sdX appear in dmesg)
- when you put it in /etc/fstab, you need "_netdev" in the options so that it doesn't try to mount it before the network is available.
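For that last step, a sample /etc/fstab line might look like this - the device name and mount point are just examples, substitute your own:

```
/dev/sdb1   /mnt/iscsi   ext3   defaults,_netdev   0 0
```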
That should present an iSCSI block device to your system; check your dmesg for the device name. Unfortunately, I was unable to find a simple iSCSI command that lists the mapped device name alongside the corresponding iSCSI LUN information. In the old days (with the Cisco(?) iSCSI tools), there used to be a nice little iscsi-ls command that would list all your iSCSI devices and where they came from. You now have to manually match /proc/scsi/scsi entries against /sys/class/scsi_disk/lun-number/ and dig for the information that way.
I have found that iSCSI in general is not all that fast. You can get decent performance out of it, but nothing close to a dedicated local disk with a good hardware RAID solution. I have seen asymmetric read/write performance on iSCSI disks, with writes being a lot faster than reads. Also, don't waste your money on hardware iSCSI initiators. They don't help a whole lot with speed and are a pain to maintain (with out-of-kernel drivers in some cases). Most modern servers have plenty of horsepower to drive iSCSI using their CPUs. I have heard that running iSCSI on a 10 Gbps network helps with performance (we run it on a 1 Gbps network).
iSCSI is in general cheaper than comparable full SAN solutions, but performance does suffer. That's okay for most workloads, except perhaps I/O-heavy database applications. Choosing the right vendor for your NAS device matters a lot. Don't believe the specs you get from the vendors - try a few of them, perform some tests (bonnie++, plain old dd, iozone), and decide how much you want to sink into it.
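For the "perform some tests" part, even plain old dd gives you a useful first number. A rough sketch - the path below is a placeholder, point it at a file on the iSCSI-backed mount you want to measure:

```shell
# TESTFILE is a placeholder - set it to a path on the iSCSI mount.
TESTFILE="${TESTFILE:-/tmp/iscsi-ddtest}"
# Write 64 MiB in 1 MiB blocks; conv=fdatasync forces the data to disk
# before dd exits, so the reported rate includes the actual write-out
# instead of just filling the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
# Clean up the test file afterwards.
rm -f "$TESTFILE"
```

Run it a few times (and with larger counts than 64 once you trust it) - the first run on a fresh volume is rarely representative.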
HTH someone out in the ether!