File share recommendations for many CAD files


File share recommendations for many CAD files – getting file sharing right can streamline a team's work and catch problems before they get out of hand. This post works through a reader question about Windows file sharing for a large collection of CAD files currently kept in Subversion, along with the suggested solutions.

Description of problem / environment

We currently have around 100 GB of CAD files (90k files, 6k directories) stored in a couple of Subversion repositories. It seems an unnecessary hassle and burden to keep this much binary data in Subversion. It's also a burden for people to check in new files, since they need to add and check out a directory before they can commit. The only "advantage", being able to just right-click and "update", carries the penalty of two copies of each file being stored on disk (that is how SVN working copies work), and it is very slow.

There is no meaningful version history in the files: the CAD files are not modified once they are added, or if they are, in this particular case it is not data we care about. Only the current, latest state (HEAD) matters, so exporting the data out of SVN is straightforward. Editing the files is not really part of the workflow and is more likely to be accidental, and the collection involves 5+ CAD systems, so I'm not sure a "PLM"-type system would really be ideal or warranted.

The file server currently runs Windows Server 2003; that will likely change in about six months' time (either to Server 2008 R2 with a big RAID 6 array, or to a NAS, with Server 2008 R2 probably involved either way).

Due to the sheer size, no one really checks out all the parts (or even a given directory) very often, and there is already a read-only network share that updates itself once a day from Subversion. That auto-update process breaks all the time: the SVN working copy gets dirty or ends up in a bad state and needs to be cleaned. The share is how the majority of users access these parts, so they are already used to accessing them that way; the change is primarily to how parts get added.
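Since the usual failure is a dirty working copy, the nightly job can be made largely self-healing by running `svn cleanup` before the update. A minimal Python sketch, where the working-copy path and how the script is scheduled are assumptions:

```python
# Sketch: self-healing nightly SVN update for the read-only share.
# The working-copy path and scheduling (e.g. Task Scheduler) are assumptions.
SVN = "svn"

def build_update_commands(wc_path):
    """Commands for a nightly update: 'svn cleanup' first, so a locked or
    dirty working copy is repaired, then a plain non-interactive update."""
    return [
        [SVN, "cleanup", wc_path],
        [SVN, "update", "--non-interactive", wc_path],
    ]

def run_nightly_update(wc_path):
    import subprocess
    for cmd in build_update_commands(wc_path):
        subprocess.run(cmd, check=True)

# Usage (e.g. scheduled once a day):
#   run_nightly_update(r"D:\shares\cad-readonly")
```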

What new workflow options are there? Am I missing anything?

I’d like to update our workflow for dealing with CAD files. The consideration currently on the table is going to a straight Windows network share, ideally maintaining the read-only behavior, but obviously people need a place to dump new files and have them added to the share. If the network share becomes the primary source of the data, it will be important that people aren’t opening, editing, and saving the files in place all the time. I suppose the importance of that is debatable, but generally the contract when editing is that users copy files to their own PC, so the “main” copy of a given file isn’t modified for everyone else.

Is it not worth the hassle trying to separate adding files from accessing them? (to maintain read-only access of the share)

Setting the share to allow write but not modify isn’t necessarily an option (if maintaining read-only access is a core requirement), because CAD systems like Pro/ENGINEER take a CAD file XYZ.prt and increment a number on each save (e.g. XYZ.prt.1, XYZ.prt.2, etc.), which would leave many copies behind if people accidentally save to the share.
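To illustrate, those numbered saves could be detected (and the stale ones pruned) with a short script. This is a sketch only: the `name.ext.N` pattern comes from the description above, but the extension list and helper names are assumptions.

```python
import re

# Spot Pro/ENGINEER-style numbered saves (XYZ.prt.1, XYZ.prt.2, ...).
# The .prt/.asm/.drw extension list is an assumption, not exhaustive.
_VERSIONED = re.compile(r"^(?P<base>.+\.(?:prt|asm|drw))\.(?P<ver>\d+)$",
                        re.IGNORECASE)

def latest_versions(filenames):
    """Map each base name (e.g. 'XYZ.prt') to its highest saved version."""
    latest = {}
    for name in filenames:
        m = _VERSIONED.match(name)
        if m:
            base, ver = m.group("base"), int(m.group("ver"))
            latest[base] = max(latest.get(base, 0), ver)
    return latest

def redundant_copies(filenames):
    """Everything except the highest-numbered save of each part."""
    keep = latest_versions(filenames)
    return [n for n in filenames
            if (m := _VERSIONED.match(n))
            and int(m.group("ver")) < keep[m.group("base")]]
```

A cleanup job could run `redundant_copies` over a directory listing and delete (or archive) what it returns, leaving only the latest save of each part.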

So far I have a hazy idea that I could script a writable “drop box” that copies files to the share, denies zip files, and refuses to overwrite any existing files. That leaves me with the manual duty of deleting files (occasionally necessary, but rare, and it could be given to a select group of users). Maybe, despite its imperfections, Subversion isn’t terrible; I’m looking for some other opinions here. What I don’t want is to change everyone’s workflow only to make the situation more work for me or the users (30-40 users).
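That drop-box idea could be sketched roughly like this in Python. The folder names, the blocked-extension list, and the function names are all illustrative assumptions, not a finished tool:

```python
import shutil
from pathlib import Path

# Sketch of a writable "drop box" whose contents are moved onto the
# share only if they pass basic checks: no zip files, never overwrite.
BLOCKED_EXTENSIONS = {".zip"}  # assumption: extend as needed

def validate_drop(name, share_dir):
    """Return (ok, reason) for a file dropped into the writable folder."""
    if Path(name).suffix.lower() in BLOCKED_EXTENSIONS:
        return False, "blocked extension"
    if (Path(share_dir) / name).exists():
        return False, "refusing to overwrite existing file"
    return True, "ok"

def process_dropbox(dropbox_dir, share_dir):
    """Move valid files from the drop box to the share; leave rejects behind."""
    moved, rejected = [], []
    for f in Path(dropbox_dir).iterdir():
        if not f.is_file():
            continue
        ok, reason = validate_drop(f.name, share_dir)
        if ok:
            shutil.move(str(f), str(Path(share_dir) / f.name))
            moved.append(f.name)
        else:
            rejected.append((f.name, reason))
    return moved, rejected
```

Rejected files stay in the drop box so the user can see the problem; a real version would also want logging and, per the question, a restricted delete workflow for the share itself.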

Solution :

I should wait for your response to my comments, but…

How about:

  1. A read-only share for your files.

  2. A writable share with the same directory structure.

  3. When someone needs to update a file, they “check it out” from the read-only share – that is, they make a copy of it locally. (The limitation of what I’m about to say next is that there isn’t a check-out process…)

  4. They work on the file locally, any temp files are only on their hard drive.

  5. When they’re done updating the file, they copy it back to the correct directory on the writable share.

  6. Using one of the many file copy tools (rsync, SecondCopy, whatever), at whatever interval you want, files are copied from the writable directories to the corresponding read-only directory. The new version of the file will overwrite the previous one, or you could keep versions at that point if you want to.

As I said, there’s no actual check-out in this system, so it doesn’t deal with two or more people working on the same file at the same time. Collision resolution could make use of the fact that people will have a local copy of their work (at least for a while) to fall back on.
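Step 6 above, the periodic copy from the writable share into the read-only one, could be sketched as follows. In practice a tool like rsync or robocopy would do this job, so treat the paths and logic as illustrative assumptions only:

```python
import os
import shutil

# Illustrative one-way mirror: copy anything new or newer from the
# writable share into the read-only share, preserving the layout.
def sync_newer(writable_root, readonly_root):
    copied = []
    for dirpath, _dirnames, filenames in os.walk(writable_root):
        rel = os.path.relpath(dirpath, writable_root)
        dest_dir = os.path.join(readonly_root, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dest_dir, name)
            # Copy if missing on the read-only side, or if the writable
            # copy is newer. This is the point where you could archive
            # the old version instead of overwriting it.
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(os.path.join(rel, name))
    return copied
```

Because `copy2` preserves timestamps, a second run with no changes copies nothing, so the job is cheap to schedule frequently.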

I fail to understand why you are using a version control system when you are clearly not using it as such. Right now you are not getting the benefits of version control, yet are still paying for it in resources used.

Given your description I suggest using conventional file/folder shares instead. Why make it any more complicated than it needs to be? Structure is most easily established using clear and sensible folder naming and layering. I believe your users will not only adapt to it quickly but will thank you for the change.
