Metacognition refers to the ability to reflect on and monitor one's cognitive processes, such as perception, memory and decision-making. Metacognition is often assessed in the lab by whether an observer's confidence ratings are predictive of objective success, but simple correlations between performance and confidence are susceptible to undesirable influences such as response biases. Recently, an alternative approach to measuring metacognition has been developed (Maniscalco & Lau, 2012) that characterises metacognitive sensitivity (meta-d') by assuming a generative model of confidence within the framework of signal detection theory. However, current estimation routines require an abundance of confidence-rating data to recover robust parameters, and only provide point estimates of meta-d'. In contrast, hierarchical Bayesian estimation methods provide opportunities to enhance statistical power, incorporate uncertainty in group-level parameter estimates and avoid edge-correction confounds. Here I introduce such a method for estimating meta-d' from confidence ratings and demonstrate its application for assessing group differences. A tutorial is provided on both the meta-d' model and the preparation of behavioural data for model fitting. A hierarchical approach may be particularly useful in situations where limited data are available, such as when quantifying metacognition in patient populations. In addition, the model may be flexibly expanded to estimate parameters encoding other influences on metacognitive efficiency. MATLAB software and documentation for implementing hierarchical meta-d' estimation (HMeta-d) can be downloaded at https://github.com/smfleming/HMM.
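As a rough illustration of the data-preparation step mentioned above, the sketch below bins single-trial stimulus, response and confidence data into the two response-count vectors conventionally used as input to meta-d' fitting (counts for each stimulus class ordered from high-confidence "S1" responses to high-confidence "S2" responses). It is written in Python rather than the accompanying MATLAB toolbox, and the function name and argument conventions are my own, not the package's API.

```python
import numpy as np

def trials_to_counts(stim, resp, conf, n_ratings):
    """Bin trials into the response-count vectors used for meta-d' fitting.

    stim, resp : sequences of 0/1 (stimulus class S1/S2; observer's response)
    conf       : integer confidence ratings from 1 (low) to n_ratings (high)
    Returns (nR_S1, nR_S2): for each stimulus class, response counts ordered
    from high-confidence "S1" responses down to high-confidence "S2" responses,
    giving 2 * n_ratings bins per stimulus class.
    """
    stim, resp, conf = map(np.asarray, (stim, resp, conf))
    nR_S1, nR_S2 = [], []
    # "S1" responses (resp == 0), highest confidence first
    for c in range(n_ratings, 0, -1):
        nR_S1.append(int(np.sum((stim == 0) & (resp == 0) & (conf == c))))
        nR_S2.append(int(np.sum((stim == 1) & (resp == 0) & (conf == c))))
    # "S2" responses (resp == 1), lowest confidence first
    for c in range(1, n_ratings + 1):
        nR_S1.append(int(np.sum((stim == 0) & (resp == 1) & (conf == c))))
        nR_S2.append(int(np.sum((stim == 1) & (resp == 1) & (conf == c))))
    return nR_S1, nR_S2
```

Each vector sums to the number of trials of that stimulus class, so the counts define a multinomial distribution over confidence-graded responses, which is the observation model the hierarchical fit operates on.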