Initially, I like to have NPM discover EVERYTHING. When I'm trying to wrap my brain around a new infrastructure, I need to know what's out there. Servers, switches, storage, hypervisors, even desktops and printers. This way I get a sense of scale.
Desktops are the first to go, though. I don't want to know how many times the visitor center PC reboots in a day. Printers usually go, too. I'll let them send traps if they run into trouble, but I certainly don't need to monitor physical memory utilization of a printer on a constant basis. In the end, it's routers, switches, servers, and storage that I care about.
On monitoring spam: I agree with earlier posts that tuning your thresholds is a great way to reduce duplicate alerts from any NMS. A 5-second spike in CPU shouldn't trip any alarms, but 95% CPU utilization for 15 minutes might warrant some diagnostics. In these cases, I let NPM run with the defaults for a month or two to establish a performance baseline, then go back and start tweaking alert thresholds.
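The sustained-threshold idea is simple enough to sketch in a few lines. This isn't NPM's actual alert engine, just an illustration of the logic: keep a rolling window of samples and fire only when every sample in a full window is over the threshold, so a brief spike never trips the alarm. The class name, threshold, and window size are all hypothetical.

```python
from collections import deque

class SustainedAlert:
    """Fire only when CPU stays above a threshold for a full window
    of consecutive samples (e.g. one sample per minute, window=15)."""

    def __init__(self, threshold=95.0, window=15):
        self.threshold = threshold
        # deque with maxlen automatically drops the oldest sample
        self.samples = deque(maxlen=window)

    def observe(self, cpu_percent):
        """Record a sample; return True only when the window is full
        and every sample in it exceeds the threshold."""
        self.samples.append(cpu_percent)
        return (len(self.samples) == self.samples.maxlen
                and all(s > self.threshold for s in self.samples))
```

With a window of 3, three consecutive readings of 99% fire the alert, while a single spike followed by normal readings does not.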