Two depth snapshots are all we need: one before contact, one during. By reading how the surface swells or dents—how wide the affected patch is, how steep the edges are—we teach a lightweight model to translate raw 3D deformation into what you actually care about: the contact force (direction + magnitude) and the local stiffness. No wearables, no embedded sensors, just an RGB-D camera and geometry doing the talking. It runs in real time, plays nicely with gels, foams, silicone sheets, and rigid spheres alike, and turns “push and guess” into “press and know”—a clean, low-cost bridge from point clouds to touch.
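
To make the idea concrete, here is a minimal sketch of that two-snapshot pipeline, assuming aligned depth frames as NumPy arrays in meters and using scikit-learn's `MLPRegressor` as a stand-in for the lightweight model. The feature set (patch size, indentation depth, edge steepness), the deformation threshold, and the toy training labels are illustrative assumptions, not details from the original work.

```python
# Sketch only: synthetic data, hypothetical features and labels.
import numpy as np
from sklearn.neural_network import MLPRegressor

def deformation_features(depth_before, depth_during, thresh=0.001):
    """Summarize how the surface swelled or dented between two depth frames."""
    diff = depth_before - depth_during        # nonzero where the surface moved
    patch = np.abs(diff) > thresh             # pixels that actually deformed
    if not patch.any():
        return np.zeros(4)
    gy, gx = np.gradient(diff)                # local slope of the deformation
    steepness = np.hypot(gx, gy)[patch]
    return np.array([
        patch.sum(),                          # how wide the affected patch is (px)
        np.abs(diff[patch]).max(),            # deepest indentation (m)
        np.abs(diff[patch]).mean(),           # average indentation (m)
        steepness.mean(),                     # how steep the edges are
    ])

# Lightweight model: geometric features -> (fx, fy, fz, stiffness).
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)

# Toy training pairs so the sketch runs end to end (real data would come
# from the RGB-D camera plus a force/stiffness reference).
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    before = np.full((64, 64), 0.5)                    # flat surface 0.5 m away
    press = rng.uniform(0.002, 0.02)                   # synthetic indentation depth
    yy, xx = np.mgrid[:64, :64]
    dent = press * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 80.0)
    during = before + dent                             # pressed region recedes from camera
    X.append(deformation_features(before, during))
    y.append([0.0, 0.0, 500.0 * press, 25_000.0])      # fake force/stiffness labels
model.fit(np.array(X), np.array(y))

# Inference: one frame before contact, one during (reusing the last synthetic pair).
fx, fy, fz, stiffness = model.predict(
    deformation_features(before, during).reshape(1, -1))[0]
print(f"force ~ ({fx:.2f}, {fy:.2f}, {fz:.2f}) N, stiffness ~ {stiffness:.0f} N/m")
```

In practice the regressor would be trained on real depth pairs paired with ground-truth force and stiffness measurements; the point of the sketch is only the flow: difference two depth frames, summarize the dent's width and steepness, and let a small model map that geometry to force and stiffness.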