Is it possible to configure Windows to automatically break a read lock on a shared file when a user with full permissions deletes that file via the share? I know there are ways to do this manually, but I want it to happen automatically.
To add more detail: I have a host machine running Windows Server 2012 R2. The host runs a Windows 10 VM in Hyper-V, and it also shares a folder with permissions such that "Everyone" has read-execute access and one user on the VM has full access. When I connect to the share from a second machine (as an "Everyone" user), I can access the files read-only as expected and open them in the shell, Notepad, etc. But if, while I have files open, I try to delete them from the share as the VM user, the delete hangs because there is a read lock on the file. If, however, I delete them from the host server, the read locks are usually (but not always) forcibly removed. Is there a way to get this behavior consistently when deleting files on the host or through the VM user, e.g. a "write-access-overrides-read-locks" share flag?
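For context on why the delete hangs: in Win32, whether a concurrent delete succeeds depends on the sharing flags the *reader* passed when it opened the file, not on the deleter's permissions. Most viewers (Notepad, the shell preview) open with FILE_SHARE_READ but without FILE_SHARE_DELETE, so the delete is refused with a sharing violation. A minimal sketch, assuming a Windows host (the UNC path is a hypothetical placeholder):

```python
import sys

# Win32 CreateFileW constants (values from the Windows SDK headers).
GENERIC_READ      = 0x80000000
FILE_SHARE_READ   = 0x00000001
FILE_SHARE_DELETE = 0x00000004
OPEN_EXISTING     = 3

def open_reader(path, allow_delete):
    """Open `path` read-only, optionally permitting concurrent deletion.

    If `allow_delete` is False (how most viewers open files), any
    DeleteFile/MoveFileEx on the path from another handle fails with
    ERROR_SHARING_VIOLATION until this handle is closed.
    """
    import ctypes
    share = FILE_SHARE_READ | (FILE_SHARE_DELETE if allow_delete else 0)
    return ctypes.windll.kernel32.CreateFileW(
        path,            # lpFileName
        GENERIC_READ,    # dwDesiredAccess
        share,           # dwShareMode -- the decisive parameter here
        None,            # lpSecurityAttributes
        OPEN_EXISTING,   # dwCreationDisposition
        0,               # dwFlagsAndAttributes
        None)            # hTemplateFile

if sys.platform == "win32":
    # Hypothetical path for illustration only.
    h = open_reader(r"\\host\share\file.txt", allow_delete=True)
else:
    print("demo requires Windows")
```

This is only an illustration of the sharing-mode semantics, not a fix: the readers in my scenario are third-party applications, so I cannot change the share mode they open with.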
Edit:
What I really want to be able to do is, from both the host server and the VM, via both the shell and the command prompt, rename and delete files and have the operation always succeed, even if other users have the files open for read via the share.
Edit2:
To clarify what I mean by automatic: I need the read locks to be ignored even when using the standard commands (e.g. rmdir, erase, etc.) as well as the shell interface, and ideally even the Win32 APIs (though the latter is not required). Using a script or other custom command, even one with the exact same syntax, won't work unless there's a way to hook the standard commands to call the custom ones. The reason is that I am running third-party batch files and executables that modify the shared files, and those scripts and programs fail if they are not able to modify the target files. Modifying the third-party files to use custom lock-breaking commands is not an option.
I know that as an alternative to all of this I could run the third-party programs using an internal staging directory, then move over pieces of their output as they are finalized, but my goal is to have the output be made available on the share progressively, not just once each piece is finished.