stopModelInvocationJob

Stops a batch inference job. You're only charged for tokens that were already processed. For more information, see Stop a batch inference job.
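
A minimal sketch of stopping a batch inference job with the AWS SDK for Python (boto3). The region, account ID, and job ARN below are illustrative placeholders, and the error handling is an assumption about how you might surface a failed stop request, not part of this reference.

```python
import boto3
from botocore.exceptions import ClientError

# Bedrock control-plane client (region is an example value).
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder ARN for the batch inference job you want to stop.
job_arn = "arn:aws:bedrock:us-east-1:123456789012:model-invocation-job/example-job-id"

try:
    # StopModelInvocationJob returns an empty response body on success.
    bedrock.stop_model_invocation_job(jobIdentifier=job_arn)
    print(f"Stop requested for {job_arn}")
except ClientError as err:
    # The service raises an error if, for example, the job doesn't exist
    # or is not in a state that can be stopped.
    print(f"Could not stop job: {err}")
```

You can confirm the job's status afterward (for example, with a get/describe call for the same job identifier) before relying on it being stopped, since the stop request is asynchronous from the caller's perspective.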