## Abstract

Singular covariance matrices are frequently encountered in both machine learning and optimization problems, most commonly due to high data dimensionality and insufficient sample sizes. Among the many regularization methods, we focus here on a relatively recent random matrix theoretic approach: creating well-conditioned approximations of a singular covariance matrix and its inverse by taking the expectation of its random projections. We are interested in the error of a Monte Carlo implementation of this approach, which in practice allows subsequent parallel processing in low dimensions. We find that *O*(*d*) random projections, where *d* is the size of the original matrix, are sufficient for the Monte Carlo error to become negligible, in the sense of expected spectral norm difference, for both covariance and inverse covariance approximation, in the latter case under mild assumptions.
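The approach described above can be sketched as follows. This is a minimal, illustrative Monte Carlo implementation in the spirit of the expected-random-projection estimators from the random matrix theory literature; the function name, the choice of Gaussian projections orthonormalized via QR, and the parameters `k` and `n_proj` are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def mc_inverse_approx(S, k, n_proj, rng):
    """Monte Carlo approximation of E[W (W^T S W)^{-1} W^T], where W is a
    random d x k matrix with orthonormal columns.

    Each k x k projected matrix W^T S W is generically invertible for
    k <= rank(S), so every term is well defined even when S itself is
    singular; averaging the terms yields a well-conditioned approximate
    inverse of S.
    """
    d = S.shape[0]
    acc = np.zeros((d, d))
    for _ in range(n_proj):
        # Random k-dimensional projection: orthonormalize a Gaussian matrix.
        W, _ = np.linalg.qr(rng.standard_normal((d, k)))
        M = W.T @ S @ W                    # k x k compression of S
        acc += W @ np.linalg.inv(M) @ W.T  # lift the small inverse back to d x d
    return acc / n_proj

# Example: a singular 20 x 20 sample covariance built from only 5 samples.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 20))
S = X.T @ X / 5                            # rank at most 5, hence singular
A = mc_inverse_approx(S, k=3, n_proj=200, rng=rng)
```

The abstract's result suggests that on the order of *d* such projections already suffice for the Monte Carlo error to be negligible in expected spectral norm; each projection is independent, so the loop parallelizes trivially.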

| Original language | English |
| --- | --- |
| Pages (from-to) | 929-950 |
| Number of pages | 22 |
| Journal | Analysis and Applications |
| Volume | 18 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 17 Jul 2020 |

## Keywords

- Singular covariance
- precision matrix
- curse of dimensionality
- random projections
- Monte Carlo error