Tracking biosignals is crucial for monitoring well-being and preempting the development of serious medical conditions. Today, wearable devices can conveniently record various biosignals, creating the opportunity to monitor health status without disrupting one's daily routine. Despite the widespread use of wearable devices and existing digital biomarkers, the absence of curated data with annotated medical labels hinders the development of new biomarkers to measure common health conditions. In fact, medical data sets are often small compared to those in other domains, which poses an obstacle to developing neural network models for biosignals. To address this challenge, we employed self-supervised learning on unlabeled sensor data collected under informed consent from the large longitudinal Apple Heart and Movement Study (AHMS) to train foundation models for two common biosignals recorded on Apple Watch: photoplethysmography (PPG) and electrocardiogram (ECG). We curated PPG and ECG data sets from AHMS that include data from ~141,000 participants spanning ~3 years. Our self-supervised learning framework includes participant-level positive pair selection, a stochastic augmentation module, and a regularized contrastive loss optimized with momentum training, and it generalizes well to both the PPG and ECG modalities. We show that the pre-trained foundation models readily encode information about participants' demographics and health conditions. To the best of our knowledge, this is the first study to build foundation models using large-scale PPG and ECG data collected via consumer wearable devices; previous work has commonly used smaller data sets collected in clinical and experimental settings. We believe PPG and ECG foundation models can enhance future wearable devices by reducing reliance on labeled data, and they hold the potential to help users improve their health.
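To make the framework sentence concrete, the sketch below shows one common instantiation of such a setup: an InfoNCE-style contrastive loss over participant-level positive pairs, combined with an exponential-moving-average ("momentum") encoder update. This is a minimal illustration under those assumptions, not the study's exact loss or regularizer; the names `info_nce`, `momentum_update`, and `tau` are hypothetical, and the stochastic augmentation module and encoder architectures are omitted.

```python
import torch
import torch.nn.functional as F

def info_nce(z_online: torch.Tensor, z_momentum: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over a batch of embeddings.

    Row i of z_online and row i of z_momentum embed two segments drawn
    from the same participant (participant-level positive pair); all
    other rows in the batch act as negatives.
    """
    # Normalize so the dot product is cosine similarity.
    z_online = F.normalize(z_online, dim=1)
    z_momentum = F.normalize(z_momentum, dim=1)
    # (B, B) similarity matrix; the diagonal holds the positives.
    logits = z_online @ z_momentum.t() / temperature
    targets = torch.arange(z_online.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

@torch.no_grad()
def momentum_update(online: torch.nn.Module, momentum: torch.nn.Module,
                    tau: float = 0.99) -> None:
    """EMA ("momentum") update of the target encoder from the online encoder."""
    for p_o, p_m in zip(online.parameters(), momentum.parameters()):
        p_m.mul_(tau).add_(p_o.detach(), alpha=1.0 - tau)

# Usage sketch: x1, x2 are two stochastically augmented segments from the
# same participant; only the online encoder receives gradients.
# loss = info_nce(online_encoder(x1), momentum_encoder(x2))
# loss.backward(); optimizer.step(); momentum_update(online_encoder, momentum_encoder)
```

In designs of this kind, the momentum encoder provides stable targets while the online encoder is trained by gradient descent, which is one way "momentum training" is typically realized in contrastive pre-training.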