$$f(x) = \sum_{m=0}^M \beta_m h(x, a_m)$$
where the $\beta_m$ are real coefficients and the $a_m$ are vectors of trained parameters. At each iteration, gradient boosting adds a new function $h$ chosen by steepest descent.
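A minimal sketch of this additive expansion, assuming squared-error loss and one-dimensional regression stumps as the base learner $h(x, a)$ (both choices are illustrative, not from the text). With squared loss, the negative gradient at each point is just the residual $y - f(x)$, so each step fits a new stump to the residuals:

```python
import numpy as np

def fit_stump(x, r):
    """Fit a one-split regression stump h(x, a) to targets r.
    The parameter vector a is (threshold, left value, right value)."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= t, left.mean(), right.mean())
        err = np.sum((r - pred) ** 2)
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return (t, lv, rv)

def stump_predict(a, x):
    t, lv, rv = a
    return np.where(x <= t, lv, rv)

def gradient_boost(x, y, M=50, lr=0.1):
    """Build f(x) = sum_m beta_m h(x, a_m) by repeatedly fitting
    the base learner to the negative gradient of the squared loss."""
    f = np.zeros_like(y, dtype=float)
    model = []
    for _ in range(M):
        r = y - f                      # negative gradient of (1/2)(y - f)^2
        a = fit_stump(x, r)
        model.append((lr, a))          # here beta_m is a fixed shrinkage rate
        f += lr * stump_predict(a, x)
    return model

def predict(model, x):
    return sum(b * stump_predict(a, x) for b, a in model)
```

Here the $\beta_m$ are fixed to a small learning rate rather than found by line search, a common simplification (shrinkage).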