Overview
With just a few lines of code, you can use the KV API to predict whether a short clip of free-form speech from a patient contains sufficient signals consistent with a current depressive episode. Our machine-learning models can predict signs of depression from voice samples acquired across a wide range of workflows and settings, including but not limited to clinical call centers, telehealth platforms, and remote patient monitoring venues. Here you will find all the documentation and code samples necessary to integrate KV into your clinical workflows.
The KV API conforms to the REST architectural style. It has predictable resource-oriented URLs, accepts a range of request bodies, returns JSON-encoded responses, and uses standard HTTP verbs and response codes.
Getting a prediction from Kintsugi involves three main API calls. Before requesting a prediction, you must initiate a session to confirm that you have the patient's consent and to obtain a session identifier from Kintsugi; you do this by calling the /initiate endpoint. The session identifier associates the audio sample with its corresponding prediction result. Once you have a session ID, call the /predict endpoint to send the audio sample you'd like Kintsugi to analyze. Although the analysis can take a few seconds, the call is non-blocking. The last step in the prediction flow is a GET request that retrieves the prediction results using the session identifier.
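The three-call flow above can be sketched in Python. This is a minimal, illustrative sketch only: the base URL, authentication header, payload field names, and the exact path of the GET results endpoint are assumptions, not the documented KV API surface; consult the endpoint reference for the real values. The helpers below only construct request descriptions (they do not perform HTTP calls), so you can plug them into any HTTP client.

```python
# Hypothetical sketch of the three-call prediction flow.
# BASE_URL, the X-API-Key header, and all payload field names are
# placeholders/assumptions for illustration, not documented values.
BASE_URL = "https://api.example-kintsugi.com"  # placeholder host


def build_initiate_request(api_key: str, has_consent: bool) -> dict:
    """Step 1: initiate a session, asserting that patient consent was obtained."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/initiate",
        "headers": {"X-API-Key": api_key},
        "json": {"consent": has_consent},  # field name is an assumption
    }


def build_predict_request(api_key: str, session_id: str, audio_path: str) -> dict:
    """Step 2: submit the audio sample for analysis (the call is non-blocking)."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/predict",
        "headers": {"X-API-Key": api_key},
        "data": {"session_id": session_id},
        "file": audio_path,  # path to the patient's voice sample
    }


def build_results_request(api_key: str, session_id: str) -> dict:
    """Step 3: retrieve the prediction result by session identifier."""
    return {
        "method": "GET",
        "url": f"{BASE_URL}/prediction/{session_id}",  # path is an assumption
        "headers": {"X-API-Key": api_key},
    }
```

Because /predict is non-blocking, a client would typically poll the GET results endpoint (or wait a few seconds) after submitting the audio before reading the prediction.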
Kintsugi also offers a /feedback API, which you can use to submit any additional results you may have, helping us continuously improve our prediction models.
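A feedback submission might look like the sketch below. As with the flow above, the base URL, header, and payload field names are assumptions for illustration; the /feedback endpoint's actual schema is defined in the endpoint reference.

```python
# Hypothetical sketch of submitting follow-up results via the /feedback API.
# BASE_URL, the X-API-Key header, and the payload field names are assumptions.
BASE_URL = "https://api.example-kintsugi.com"  # placeholder host


def build_feedback_request(api_key: str, session_id: str, outcome: str) -> dict:
    """Attach an additional clinical result to a completed session."""
    return {
        "method": "POST",
        "url": f"{BASE_URL}/feedback",
        "headers": {"X-API-Key": api_key},
        "json": {
            "session_id": session_id,
            "outcome": outcome,  # e.g. a clinician-confirmed label (assumption)
        },
    }
```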
You will need to initiate a session with KV before you can proceed to the next steps: submitting an audio file with the patient's voice sample and receiving the prediction associated with that sample.