For quasi-Newton methods in unconstrained optimization, it is valuable to develop methods that are robust, i.e., that converge on a wide range of problems. Trust-region algorithms are often regarded as more robust than line-search methods; however, because trust-region methods are computationally more expensive, the most popular quasi-Newton implementations rely on line searches. To address this gap, we develop a trust-region method that updates an LDL^T factorization, scales quadratically with the size of the problem, and is competitive with a conventional line-search method.
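The paper's specific factorization update is not reproduced here. As a rough illustration of why maintaining an LDL^T factorization can keep the cost of a quasi-Newton step at O(n^2) rather than the O(n^3) of refactorizing from scratch, the sketch below applies a standard rank-one modification of an LDL^T factorization (the Bennett / Gill-Golub-Murray-Saunders scheme), which is one common building block for such updates; the function name and the test setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ldl_rank_one_update(L, d, alpha, z):
    """Overwrite the factors of A = L @ diag(d) @ L.T with those of A + alpha*z*z^T.

    Standard rank-one modification (Bennett / Gill-Golub-Murray-Saunders);
    runs in O(n^2) time instead of an O(n^3) refactorization.
    L: unit lower-triangular (n, n) array, d: length-n diagonal, z: length-n vector.
    """
    n = d.size
    w = z.astype(float).copy()
    for j in range(n):
        p = w[j]
        d_new = d[j] + alpha * p * p
        beta = alpha * p / d_new
        alpha = d[j] * alpha / d_new
        d[j] = d_new
        # Update the working vector first, then column j of L.
        w[j + 1:] -= p * L[j + 1:, j]
        L[j + 1:, j] += beta * w[j + 1:]
    return L, d

# Quick check against a full refactorization on a small SPD test matrix.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
z = rng.standard_normal(n)

# Reference LDL^T factors obtained from a Cholesky factor C: A = C @ C.T,
# with L[i, j] = C[i, j] / C[j, j] and d[j] = C[j, j]**2.
C = np.linalg.cholesky(A)
d = np.diag(C) ** 2
L = C / np.diag(C)

ldl_rank_one_update(L, d, 1.0, z)
print(np.allclose(L @ np.diag(d) @ L.T, A + np.outer(z, z)))  # True
```

Each quasi-Newton iteration then amounts to one or two such rank-one (or rank-two) modifications of the stored factors, which is where the quadratic per-iteration scaling claimed above comes from.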