The purpose of any HDL (hardware description language) is to be independent of the EDA vendor (the tools) and the target technology (ASIC or FPGA). In principle, through inference, the code translates to FPGA or ASIC technology cells. An example of inference is the `*` operator: the EDA tool implements a multiplier using whatever cells are available (the basic building blocks and resources of the ASIC or FPGA). The essential point is that the tool chooses. Instantiation is not inference: it places a technology-specific module directly in the RTL code, which is then no longer technology-independent. Wrappers around technology-specific blocks can ease the swapping of such cells or modules. This means that, in theory, you can use the same code for FPGA and ASIC if everything is inferred; if something is instantiated, you have to swap the module contents.
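The contrast can be sketched in a few lines of Verilog. This is a minimal illustration, not a complete design; the macro name `tech_mult16` is hypothetical and stands in for whatever cell a given vendor library provides.

```verilog
// Inferred multiplier: technology-independent. The tool decides how to
// build it (DSP slices on an FPGA, standard cells or a hard macro on ASIC).
module mult_inferred #(parameter W = 16) (
  input  wire [W-1:0]   a, b,
  output wire [2*W-1:0] p
);
  assign p = a * b;  // inference via the '*' operator
endmodule

// Wrapper around a hypothetical technology-specific macro "tech_mult16".
// Only this wrapper has to change when the design moves to another
// technology; the rest of the RTL keeps instantiating "mult_wrapped".
module mult_wrapped (
  input  wire [15:0] a, b,
  output wire [31:0] p
);
  tech_mult16 u_mult (.a(a), .b(b), .p(p));  // direct instantiation
endmodule
```

The wrapper is the usual compromise: the technology dependence still exists, but it is confined to one module that can be replaced per target.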
All this is generally speaking. Specifically, FPGA architecture differs for clocks and resets, among other things. An FPGA is fastest when data flip-flops have no reset at all; this allows the highest clock frequency. If you do need a reset, for example to get a safe FSM, the reset is synchronous. In an ASIC, every flip-flop needs a reset for design-for-test reasons. That reset is asserted asynchronously, needing no clock to become active, and deasserted synchronously. Hence there are particular things to consider that make FPGA code harder to migrate to ASIC, even though prototyping an ASIC in an FPGA is widespread. Some FPGA vendors offer an FPGA-to-ASIC solution, a middle ground between a fully programmable device and a hard-implemented ASIC.
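The two reset styles can be sketched as follows; this is a simplified illustration (module and signal names are my own), assuming an active-low asynchronous reset on the ASIC side.

```verilog
// FPGA style: synchronous reset, sampled by the clock like any other input.
module ff_sync_rst (
  input  wire clk, rst, d,
  output reg  q
);
  always @(posedge clk)
    if (rst) q <= 1'b0;   // reset only takes effect on a clock edge
    else     q <= d;
endmodule

// ASIC style: asynchronous assertion, synchronous deassertion.
// A two-stage synchronizer releases rst_n_sync on a clock edge, so all
// flops leave reset cleanly in the same cycle.
module rst_sync (
  input  wire clk, rst_n,   // rst_n asserts without needing a clock
  output reg  rst_n_sync
);
  reg stage1;
  always @(posedge clk or negedge rst_n)
    if (!rst_n) {rst_n_sync, stage1} <= 2'b00;          // async assert
    else        {rst_n_sync, stage1} <= {stage1, 1'b1}; // sync deassert
endmodule

// A data flop using the synchronized reset.
module ff_async_rst (
  input  wire clk, rst_n_sync, d,
  output reg  q
);
  always @(posedge clk or negedge rst_n_sync)
    if (!rst_n_sync) q <= 1'b0;  // becomes active without a clock
    else             q <= d;
endmodule
```

Migrating code between the two styles means touching every sequential `always` block, which is one concrete reason FPGA-to-ASIC migration takes care.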
Anyway, FPGA designs are more error-prone: the RTL simulations are not as thorough as ASIC simulations. An ASIC is so expensive that verification must reduce the risk of a bug to the absolute minimum, while an FPGA can be reprogrammed, so a bug fix is much easier to make (which does not mean it is easy to roll out in the field). ASIC design is first-time-right oriented, while FPGA design is more relaxed, or at least perceived that way, because an FPGA allows design changes.