Look at the standard integral notation

∫_{a}^{b} (some expression f of the *running* variable x) dx ……….(1)

This usually, but NOT always, means (f of x) multiplying dx, then integrating from a to b. It always *looks* like exactly that, and you feel “no doubt it means exactly that”, but the look can mislead.

In nicer computer-rendered typesetting, it’s written as

[figure: the integral in (1) in standard rendered notation]

where f(x) is our complicated expression of x.

Instead, this Leibniz notation is based on the algebraic summation of n items, each a product of the form f(x_i) · Δ:

Σ_{i=1}^{n} f(x_i) Δ ……….(2)

where Δ = (b − a)/n and x_i = a + i·Δ.
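A minimal numerical sketch of the sum in (2), where the choice f(x) = x² on [a, b] = [0, 1] is my own example (its exact integral is 1/3): as the division width Δ shrinks, the sum approaches the integral in (1).

```python
def riemann_sum(f, a, b, n):
    """The sum in (2): n divisions of width delta, each contributing f(x_i) * delta."""
    delta = (b - a) / n
    return sum(f(a + i * delta) * delta for i in range(n))

f = lambda x: x ** 2
for n in (10, 100, 10000):
    print(n, riemann_sum(f, 0.0, 1.0, n))
# The printed values approach 1/3 as n grows, i.e. as delta goes infinitesimal.
```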

What is expressed in the integral in (1) is that we *integrate over infinitesimal divisions*. These divisions divide up and cover the entire continuous range [a,b]. Therefore (1) denotes the sum in (2) *as Δ goes infinitesimal*. Indeed, most of the time we can treat the *(…)dx* as a product, just as in (2), but I feel there are some special contexts *with invisible broken glass on the floor*. I can’t identify exactly those contexts, but here are some hints.

People often put funny things after the *innocent-looking* “d”, like

(…expression of x) d(-x/2)

(…expression of x) dw(x)

(…expression of x) dy dx/dy

dw(x) means dw, the increment of w, where w is treated as a variable in its own right, even though we know that w is a function of x.

I don’t know if teachers ever do these things, but students do. They muddy the water.
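One way to make sense of “(…) d(something)” is to read it through the sum in (2): each term is f(x_i) times the *increment of that something* over the division. This is a sketch under my own assumptions (the names, and the example g(x) = −x/2 for the first bullet above); the general g plays the role of w in dw(x).

```python
def stieltjes_sum(f, g, a, b, n):
    """Sum of f(x_i) * (g(x_{i+1}) - g(x_i)) over n divisions of [a, b].

    With g(x) = x this reduces to the plain sum in (2)."""
    delta = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * delta
        total += f(x) * (g(x + delta) - g(x))
    return total

f = lambda x: x ** 2
g = lambda x: -x / 2            # the "something" after d, as in f(x) d(-x/2)
print(stieltjes_sum(f, g, 0.0, 1.0, 10000))
# Each increment of g is -delta/2, so the result is close to -(1/2) * (1/3).
```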

How about a double integral?

∫_{c}^{d}(∫_{a}^{b}(f of x and y)dx)dy

Whenever it’s confusing, I feel we had better refer to the original definition of Leibniz’s notation, based on (2).
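Reading the double integral above through (2) twice makes it concrete: the inner integral is a sum over x-divisions for each fixed y, and the outer integral sums those inner results over y-divisions. The example f(x, y) = x·y on [0,1] × [0,1] is my own choice (its exact double integral is 1/4).

```python
def double_riemann(f, a, b, c, d, n):
    """Nested sums in the sense of (2): outer over y in [c, d], inner over x in [a, b]."""
    dx = (b - a) / n
    dy = (d - c) / n
    total = 0.0
    for j in range(n):
        y = c + j * dy
        inner = sum(f(a + i * dx, y) * dx for i in range(n))  # the inner (...)dx sum
        total += inner * dy                                    # the outer (...)dy sum
    return total

print(double_riemann(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0, 400))
# approaches 1/4 as n grows
```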