If I have, for example, white light consisting of many wavelengths across the whole visible spectrum (from 400 nm to 700 nm) shining on a surface, and the surface reflects only wavelengths near 600 nm, then I will see the object as yellow (600 nm).
If I have, for example, pure yellow light with a single wavelength of 600 nm shining on that same surface, I will still see the surface as yellow.
Up to this point, everything is fine.
But now suppose I have white light that does not consist of the whole visible spectrum, but only of red (700 nm), green (550 nm), and blue (400 nm) wavelengths. Our brain still perceives this as white light. If this light hits the same yellow surface, the surface can only reflect wavelengths near 600 nm, but this time the light contains no wavelengths near 600 nm. So no light is reflected, and I will see the surface as black? The first white light would show me a yellow surface, and the second white light would show me a black one?
EDIT:
Thank you both for the clarification; that is what I imagined. The main reason for this question is that some time ago I was working on a 3D game for fun, and I wrote a simple shader following tutorials. The way I calculated the reflected light color was simply an element-wise multiplication of two vectors representing the light color and the surface reflectance color. So, for example, if I had yellow light (in RGB, with components in the range 0 to 1, the vector yv = (1, 1, 0)) and a green surface gv = (0, 1, 0), the reflected light would be green too, since (1, 1, 0) * (0, 1, 0) = (0, 1, 0), assuming the surface normal and the light ray are parallel to each other.
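The element-wise multiplication described above can be sketched like this (a minimal NumPy sketch; the function name `reflect_rgb` is illustrative, not from any actual shader code):

```python
import numpy as np

def reflect_rgb(light, surface):
    """Element-wise (Hadamard) product of an RGB light color
    and an RGB surface reflectance, as done in the shader."""
    return np.multiply(light, surface)

yv = np.array([1.0, 1.0, 0.0])  # yellow light = red + green channels
gv = np.array([0.0, 1.0, 0.0])  # green surface reflectance

print(reflect_rgb(yv, gv))  # [0. 1. 0.] -> green
```

This is the standard diffuse-color term in simple shading models; in a full shader it would additionally be scaled by the cosine of the angle between the surface normal and the light direction.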
But in the real world, the yellow light represented by yv would be light with two wavelength bands (red and green). If it were instead a pure yellow light with a narrow band (like a sodium lamp), the light reflected from the green surface would be almost null, and the surface would appear black instead of green.
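The difference between the two cases can be shown with a toy spectral model: sample the light and the surface reflectance over wavelengths and multiply element-wise, just like the RGB shader but with many bands instead of three. All numbers here (Gaussian band shapes and widths) are made-up illustrative assumptions, not measured spectra:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # sample the visible range in nm

def band(center, width):
    """Toy Gaussian spectral band centered at `center` nm (assumed shape)."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

sodium_yellow = band(589, 2)                     # narrow-band "pure" yellow
rgb_yellow = band(700, 15) + band(550, 15)       # yellow as a red+green mixture
green_surface = band(550, 20)                    # reflectance peaked at green

# Reflected power: element-wise product of light spectrum and reflectance,
# summed over all sampled wavelengths.
narrow = np.sum(sodium_yellow * green_surface)   # tiny: surface looks nearly black
mixed = np.sum(rgb_yellow * green_surface)       # large: surface looks green
print(narrow, mixed)
```

Here the narrow-band yellow barely overlaps the green reflectance band, so almost nothing is reflected, while the red+green mixture overlaps strongly at 550 nm, which is exactly the black-vs-green discrepancy described above.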