In a tutorial by Microsoft there is a code snippet similar to the following (I edited the original to reduce distraction):
function Test {
    param (
        [Parameter(ValueFromPipeline)]
        [string[]]$Params
    )
    process {
        foreach ($Param in $Params) {
            Write-Output $Param
        }
    }
}
In all previous examples, however, the process block itself already served as the loop body. To my understanding, the following simplified code should be equivalent:
function Test {
    param (
        [Parameter(ValueFromPipeline)]
        [string[]]$Params
    )
    process {
        Write-Output $Params
    }
}
Indeed, no matter what I pipe to it, the results are the same. However, the fact that this pattern appears in a first-party tutorial makes me believe there might be an actual reason for the loop.
Is there any difference between the two patterns? If yes, what is it? If no, which one is preferred?
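For concreteness, here is how the equivalence under piped input can be checked (the simplified definition is repeated so the snippet runs standalone):

```powershell
function Test {
    param (
        [Parameter(ValueFromPipeline)]
        [string[]]$Params
    )
    process {
        # Simplified version: no foreach, just emit whatever was bound
        Write-Output $Params
    }
}

'PC1', 'PC2', 'PC3' | Test    # prints PC1, PC2, PC3 - same as the foreach version
```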
Just in case my simplification is off, here is the original example:
function Test-MrErrorHandling {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory,
                   ValueFromPipeline,
                   ValueFromPipelineByPropertyName)]
        [string[]]$ComputerName
    )
    PROCESS {
        foreach ($Computer in $ComputerName) {
            Test-WSMan -ComputerName $Computer
        }
    }
}
[string[]]$Params
is an array - that is, multiple items can be passed at once. The example you link to uses Test-WSMan, whose -ComputerName parameter takes one computer name at a time, so if you passed it an entire array, it would fail; hence the loop.
Compare the two ways of invoking the function:
SomeCommandThatOutputsComputerNames | Test-MrErrorHandling
and
Test-MrErrorHandling -ComputerName 'Computer1', 'Computer2'
In the first one, the foreach loop isn't necessary, because pipeline input gets unrolled automatically and passed one by one to the process block. In the second case there is no unrolling, so the process block receives the whole array and you need foreach to process the elements one by one.
See also #4242, "Consistently document a scalar -InputObject parameter as an implementation detail or make item-by-item processing cmdlets explicitly iterate over collection".
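The unrolling difference can be observed directly. In this sketch (the function name Show-Binding is mine, not from the tutorial), the process block reports how many items were bound on each invocation:

```powershell
function Show-Binding {
    param (
        [Parameter(ValueFromPipeline)]
        [string[]]$Name
    )
    process {
        # Runs once per pipeline item, but only once for direct parameter binding
        Write-Output "process saw $($Name.Count) item(s): $($Name -join ', ')"
    }
}

'a', 'b', 'c' | Show-Binding        # three process invocations, one item each
Show-Binding -Name 'a', 'b', 'c'    # one process invocation with all three items
```

Piped input produces three lines, each reporting a single item; the direct call produces one line reporting all three. That is exactly why the tutorial's foreach is needed when the function does per-item work such as calling Test-WSMan.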