As LIDAR's pièce de résistance to date has been its ability to "see through" forest canopies to the terrain below, I don't understand how rain or snow should present an insurmountable obstacle to seeing the road ahead.
Of course they do. Airborne lidar usually works in the near-infrared spectrum, which is not absorbed by wood. It is, however, absorbed more by leaves, which have a higher water content. My main point is: airborne lidar (which is what OP is referring to when talking about removing forests and keeping the ground) gets multiple returns from one laser beam. A beam's footprint can have a radius of 1-2 meters or more when it hits the ground. By keeping only the lowest return in the z direction you almost certainly have a ground point. It's not rocket science, and interpreting ground lidar scans (Velodyne and such) in bad weather is a much, much harder problem.
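The "keep only the lowest return" step above can be sketched in a few lines. This is a minimal illustration, not real LAS processing: the `(pulse_id, x, y, z)` tuple format is a made-up simplification (real point clouds carry explicit return-number fields), but the idea of taking the minimum-elevation return per pulse is the same.

```python
def lowest_returns(points):
    """Keep only the lowest-elevation (z) return for each laser pulse.

    points: iterable of (pulse_id, x, y, z) tuples, possibly several
    returns per pulse (canopy, branches, ground).
    """
    lowest = {}
    for pulse_id, x, y, z in points:
        # Replace the stored return if this one is lower in z.
        if pulse_id not in lowest or z < lowest[pulse_id][3]:
            lowest[pulse_id] = (pulse_id, x, y, z)
    return list(lowest.values())

returns = [
    (1, 0.0, 0.0, 25.0),  # canopy hit
    (1, 0.1, 0.0, 12.0),  # branch
    (1, 0.1, 0.1, 2.0),   # ground
    (2, 5.0, 5.0, 24.0),  # dense canopy: only return, no ground point
]
print(lowest_returns(returns))
```

As the pulse-2 comment hints, the lowest return is only "most certainly" ground when the beam actually punches through; dense canopy leaves gaps that ground-classification filters have to interpolate over.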
What would prevent using the same technique, retaining only the maximum-Z (farthest) return, to tell the difference between a refracting raindrop and a car bumper?
Perhaps it is that the raindrop refracts the beam toward all sorts of other objects, which can make it difficult to tell which return actually corresponds to the longest Z distance?
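The proposal in the two comments above can be sketched the same way: keep the farthest return per beam, assuming the automotive sensor reports multiple returns at all (many don't). The beam IDs and ranges here are invented for illustration; the last beam shows the failure mode the second comment worries about, where scattering leaves no unambiguous solid return.

```python
def farthest_returns(beams):
    """Keep the farthest (maximum-range) return for each beam.

    beams: dict mapping beam_id -> list of measured ranges in meters.
    A raindrop near the sensor gives an early return; a bumper behind
    it gives a later one, so the max should be the solid object.
    """
    return {beam_id: max(ranges) for beam_id, ranges in beams.items()}

beams = {
    0: [1.2, 14.8],  # raindrop at 1.2 m, bumper at 14.8 m -> keep 14.8
    1: [14.9],       # clean single hit
    2: [0.8, 2.1],   # rain-only clutter: "farthest" is still just a drop
}
print(farthest_returns(beams))  # {0: 14.8, 1: 14.9, 2: 2.1}
```

Beam 2 is the catch: the filter always yields *some* farthest return, with no way to tell from range alone whether it is a bumper or another drop.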
If I understand correctly, the airborne LIDAR works in the time domain and is much more expensive, while the LIDAR on automobiles is much more affordable but works in the frequency domain...
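For what the time/frequency distinction means in practice: a time-domain (pulsed) lidar measures range from the round-trip time, R = c·t/2, while a frequency-domain (FMCW) lidar mixes the return with the outgoing chirp and infers range from the beat frequency, R = c·f_beat·T/(2B). A quick sketch, with chirp parameters that are purely illustrative assumptions (not any real sensor's):

```python
C = 3.0e8   # speed of light, m/s
B = 1.0e9   # chirp bandwidth in Hz (assumed for illustration)
T = 1.0e-5  # chirp duration in s (assumed for illustration)

def range_time_domain(round_trip_s):
    """Pulsed lidar: range from round-trip time, R = c*t/2."""
    return C * round_trip_s / 2

def range_fmcw(f_beat_hz):
    """FMCW lidar: range from beat frequency, R = c*f_beat*T/(2*B)."""
    return C * f_beat_hz * T / (2 * B)

print(range_time_domain(2.0e-7))  # 30.0 m
print(range_fmcw(2.0e7))          # 30.0 m for a 20 MHz beat
```

Whether the expensive/affordable split actually falls along that line is the commenter's claim, not something the sketch settles; historically pulsed time-of-flight units have been common in both domains.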
And humans have the unique ability to instantly and reliably differentiate between visual noise and objects on the road they wish to avoid.
Lidars record much less information than two optical lenses (eyes). You get distance in 1, 2, or 2+1 dimensions, plus intensity. Eyes can detect distance, wavelength, reflections, refraction, contrast, brightness, etc., all with the assistance of the brain, of course. If today's computers and software had the pattern-matching capabilities of even a human child's brain (reasoning aside), a few moderate-resolution video cameras would be all that is necessary for self-driving.
But we're not there yet, so we employ sensors whose more limited output is simpler to interpret programmatically.
I think even humans would have trouble interpreting the mess of lidar range returns in a rain or snow storm, though they'd still do better than current computers.