Basics:

Let's assume we've just spun up a brand new cluster. The first thing we do is take a look at what storage classes we have available:

kubectl get sc
NAME                PROVISIONER                AGE
azurefile           kubernetes.io/azure-file   1d
default (default)   kubernetes.io/azure-disk   1d
managed-premium     kubernetes.io/azure-disk   1d
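
If you want to see what a class actually configures (the provisioner and its parameters, such as disk SKU and caching), you can describe it, e.g.:

kubectl describe sc default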

Now let's list any persistent volumes. Given this is a new cluster there probably won't be any:

kubectl get pv
No resources found.
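
The examples that follow all live in a dedicated web namespace, so create it first if it doesn't already exist:

kubectl create namespace web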

To spin up a pod with some persistent storage, first we need to create a PersistentVolumeClaim:

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-content
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
EOF

Let's check what Persistent Volumes and Persistent Volume Claims we have after running this:

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM             STORAGECLASS   REASON    AGE
pvc-9f39f84a-c447-11e8-a296-ee934e0ae766   3Gi        RWO            Delete           Bound     web/web-content   default                  2s


kubectl get pvc -n web
NAME          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
web-content   Bound     pvc-9f39f84a-c447-11e8-a296-ee934e0ae766   3Gi        RWO            default        1m

Let's analyse what's happened here...
We've created a PVC (i.e. a claim for a persistent volume), and we can see it listed when we run kubectl get pvc. However, a PV (the underlying Persistent Volume) has also been provisioned, even though we never created one explicitly. How did this happen?
The best explanation I found was in the official k8s docs here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Here's a summary:
Static
A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. They exist in the Kubernetes API and are available for consumption.
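
To make this concrete, here's a rough sketch of what a statically provisioned PV could look like on AKS. The disk name and URI are placeholders for a managed disk you'd have created yourself (e.g. with az disk create), and the usage label is only there so a claim can select this volume later:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-web-content
  labels:
    usage: web-content
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  azureDisk:
    kind: Managed
    diskName: myExistingDisk
    diskURI: /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/disks/myExistingDisk
EOF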

Dynamic
When none of the static PVs the administrator created matches a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class and the administrator must have created and configured that class in order for dynamic provisioning to occur. Claims that request the class "" effectively disable dynamic provisioning for themselves.
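
Our claim above didn't set a storageClassName, so the class marked (default) was used. Purely as an illustration (the claim name here is made up), a claim that wants premium storage would simply name the managed-premium class from the earlier list:

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: web-content-premium
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 3Gi
EOF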

So, how does a PVC know which PV to bind to in the absence of dynamic provisioning?
See below for an explanation (also from the docs):

Selector
Claims can specify a label selector to further filter the set of volumes. Only the volumes whose labels match the selector can be bound to the claim. The selector can consist of two fields:

matchLabels - the volume must have a label with this value
matchExpressions - a list of requirements made by specifying key, list of values, and operator that relates the key and values. Valid operators include In, NotIn, Exists, and DoesNotExist.
All of the requirements, from both matchLabels and matchExpressions are ANDed together – they must all be satisfied in order to match.
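
Tying this back to the static PV sketched earlier, a claim could bind to it by matching its label. Setting storageClassName to "" opts the claim out of dynamic provisioning so it only binds to a pre-existing volume (the rest of this walkthrough carries on with the dynamically provisioned web-content claim):

cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: static-web-content
  namespace: web
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  selector:
    matchLabels:
      usage: web-content
  resources:
    requests:
      storage: 3Gi
EOF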

Now let's create a deployment that attaches this claim as a volume and mounts it inside a container. We'll also create a service to expose the deployment:

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: web
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: web-content
        persistentVolumeClaim:
          claimName: web-content
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: web-content
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
EOF 
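
It can take a couple of minutes for Azure to provision the load balancer. You can watch the rollout and check the service's external IP with:

kubectl rollout status deployment/nginx -n web
kubectl get svc nginx -n web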

When the pod is ready, exec into the container and create some content we want to persist in /usr/share/nginx/html.

kubectl exec -n web -it nginx-68b97d6768-5h67q -- bash
cd /usr/share/nginx/html
echo "This is a persistence test on $(date)" > index.html

Now delete the pod. Because it's managed by a Deployment, a replacement pod will be created automatically from the same template. Once the new pod is running, exec into it and you'll see the data you wrote from the first pod has been persisted.
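
For example, using the pod name from the earlier kubectl get pods output:

kubectl delete pod nginx-68b97d6768-5h67q -n web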

kubectl exec -n web -it nginx-91g11f6394-3g21h -- bash
cat /usr/share/nginx/html/index.html
This is a persistence test on Sun Sep 30 15:39:53 UTC 2018

Failing over to another node:

Let's cordon and drain a node to check that our data survives the pod being rescheduled onto another node. First, check which node the pod has been scheduled on, then cordon and drain that node:

kubectl get pods -n web -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP          NODE                       NOMINATED NODE
nginx-68b97d6768-5h67q   1/1       Running   0          3m        10.0.1.38   aks-nodepool1-27870522-2   <none>

kubectl cordon aks-nodepool1-27870522-2
node/aks-nodepool1-27870522-2 cordoned

kubectl get nodes
NAME                       STATUS                     ROLES     AGE       VERSION
aks-nodepool1-27870522-0   Ready                      agent     2d        v1.11.2
aks-nodepool1-27870522-1   Ready                      agent     2d        v1.11.2
aks-nodepool1-27870522-2   Ready,SchedulingDisabled   agent     2d        v1.11.2

kubectl drain aks-nodepool1-27870522-2 --delete-local-data --ignore-daemonsets
pod/nginx-68b97d6768-5h67q evicted
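
The replacement pod should land on one of the remaining nodes, which you can confirm with:

kubectl get pods -n web -o wide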

Once our pod is up and running on another node, exec into it and you'll see the data has been persisted.

kubectl exec -n web -it nginx-91g11f6394-3g21h -- bash
cat /usr/share/nginx/html/index.html
This is a persistence test on Sun Sep 30 15:39:53 UTC 2018
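
When you've finished testing, uncordon the node so it can be scheduled on again:

kubectl uncordon aks-nodepool1-27870522-2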