[wpkg-users] Large deployments and fileshares [SEC=UNOFFICIAL]
natxo.asenjo at gmail.com
Fri Sep 3 09:03:56 CEST 2010
On Fri, Sep 3, 2010 at 7:18 AM, Michael Chinn
<Michael.Chinn at gbrmpa.gov.au> wrote:
> Just had an unfortunate experience with deploying an Acrobat Reader update
> from 8 to 9.3.4.
> It’s a simple administrative install point on a Samba server with a custom
> All went well as people logged on, until 08:50 when the server suddenly died
> under the load of machines trying to copy 109 MB of data.
> Does anyone know of an effective way to mitigate this?
It was probably not the copying that caused the trouble (you write
later that robocopy was a workaround), but the unpacking of the
package on the share, again and again for every client.
How many clients? How many smbd processes were running? What was the
load on the server? What hardware does this Samba server run on? It
may simply not be up to that much disk I/O. I've seen this on virtual
machines whose hard disks live on an NFS share. For optimal disk
performance you need LUNs on a SAN (i.e. direct hardware access to the
disks from the virtual machines - very expensive) or a (very
expensive) NetApp NFS filer with 10Gb network switches. :-)
> I ended up changing the script to robocopy the admin install point to the
> local machine before deploying from there as a work around.
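That copy-local workaround can be sketched roughly like this in a batch
script (the share path, local cache directory, and MSI name here are all
hypothetical placeholders for your environment):

```bat
rem Hypothetical paths - adjust to your own admin install point.
set SRC=\\sambaserver\deploy\AcrobatReader9
set DST=%TEMP%\AcrobatReader9

rem /E = copy subdirectories, /R:2 = two retries, /W:5 = 5s between retries
robocopy "%SRC%" "%DST%" /E /R:2 /W:5

rem robocopy exit codes below 8 indicate success; 8 and up indicate failure
if %ERRORLEVEL% GEQ 8 exit /b 1

rem Install from the local copy so only one network copy happens per client
msiexec /i "%DST%\AcroRead.msi" /qn
```

The point is that each client then hits the share for a single sequential
copy instead of unpacking the package over the network.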
> Does WPKG have any ability to do failover if the main deployment files
> aren't available or should I script for this?
WPKG cannot do this to my knowledge, but the underlying OS can:
cluster the Samba servers with GFS. If one of the nodes goes down, the
other keeps serving. It's no different from a Windows cluster; both
are complex high-availability solutions that require trained staff.
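If you do decide to script a fallback on the client side instead, a
minimal sketch might test the primary share before invoking WPKG,
assuming two shares with identical content (the server and share names
below are made up; adjust to your setup):

```bat
rem Hypothetical replicated shares - names are placeholders.
set PRIMARY=\\server1\wpkg
set BACKUP=\\server2\wpkg

rem Probe for a known file on the primary share; fall back if unreachable.
if exist "%PRIMARY%\wpkg.js" (
  set DEPLOY=%PRIMARY%
) else (
  set DEPLOY=%BACKUP%
)

cscript //nologo "%DEPLOY%\wpkg.js" /synchronize
```

Keeping the two shares in sync (e.g. via a scheduled robocopy job) is
then your responsibility, which is the main drawback compared to a real
cluster.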