Linting tools have a lot of rules you can apply to make sure the code that is written and simulated meets basic requirements for DFT: all flops are resettable, there are controls to bypass the reset, and the clocks can be driven by a test clock. These lint checks improve the quality of the design handed off to the synthesis team (and to DFT). That saves a lot of back and forth, and avoids design changes that would need to be regressed again.

For basic DFT (scan patterns) you need scan chains. In RTL, your flops are described in a clocked process (with a reset, of course), so only the functional path from the data output of one flop to the data input of the next is described. The synthesis tool maps this onto the flops available in the technology library. That flop is the real thing, and it can be found in the netlist. Simply put, it has clock, reset, and data input pins and a data output, but it also has test-related pins: the scan-chain in and out connections. You need the synthesis step (with its optimizations, merged logic, and so on) to know the exact number of flops.

The netlist produced by synthesis contains the flops, but the test pins are not yet connected. The scan-insertion tool stitches the flops together into chains. The DFT engineer can change the number of flops per chain, limit the number of chains, or even stitch two chains from different clock domains together (and these could sit in totally different modules in the RTL hierarchy).

Conclusion: linting can help ensure some essential rules are met in RTL, but the real test implementation, like the chain stitching, needs to be done on the netlist. It does not make sense to do this in RTL.
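To make the RTL-versus-netlist distinction concrete, here is a minimal sketch. The module is ordinary RTL with no test pins at all; the commented instance below it shows what the same flop might look like after synthesis and scan insertion. The cell name `SDFFRX1` and its pin names (`SI`, `SE`, etc.) are hypothetical placeholders for whatever scan flop the technology library actually provides.

```verilog
// RTL view: a plain clocked process with an async reset.
// No scan pins exist at this level -- the scan chain cannot
// be described here because the flop itself does not exist yet.
module counter_bit (
  input  wire clk,
  input  wire rst_n,
  input  wire d,
  output reg  q
);
  always @(posedge clk or negedge rst_n)
    if (!rst_n) q <= 1'b0;
    else        q <= d;
endmodule

// Netlist view after synthesis and scan insertion, using a
// hypothetical library scan-flop cell SDFFRX1:
//
//   SDFFRX1 u_q_reg (
//     .CK (clk),
//     .RN (rst_n),
//     .D  (d),        // functional data path from the RTL
//     .SI (prev_q),   // scan input, stitched by the scan-insertion tool
//     .SE (scan_en),  // scan enable: shift mode vs functional capture
//     .Q  (q)         // also fans out to the next flop's SI in the chain
//   );
```

The stitching (which flop's `Q` drives which flop's `SI`) is exactly the decision the DFT engineer makes on the netlist, which is why it cannot meaningfully be written in the RTL above.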
Why is DFT (Design for Testability) not applied at RTL design time in the ASIC design flow?