We are developing learning agents that automate semi-repetitive graphical procedures by watching a user perform them on concrete visual examples. The agent is embedded in a graphical editor that records user actions and uses machine learning techniques to generalize the recorded procedure. The generalized procedure can then be applied in new situations that are similar to, but not necessarily identical to, the original. The user can advise the system by drawing graphical annotations that express information such as part/whole relations, or by explaining actions verbally through speech recognition.
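The record-generalize-replay cycle described above can be sketched in miniature. The sketch below is a hypothetical illustration, not the system's actual implementation: it records concrete editor actions, generalizes them by replacing concrete object references with roles, and replays the procedure on a new scene that binds the same roles to different objects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    op: str        # editor operation, e.g. "select", "scale"
    target: str    # role of the object acted on, e.g. "wheel"
    params: tuple  # operation arguments

def generalize(demo):
    """Turn a concrete demonstration into a role-based procedure.
    (A real learner would also generalize parameters and ordering.)"""
    return [(a.op, a.target, a.params) for a in demo]

def replay(procedure, scene):
    """Apply a generalized procedure to a new scene that maps
    the same roles to different concrete objects."""
    log = []
    for op, role, params in procedure:
        obj = scene[role]  # look up the analogous object in the new example
        log.append(f"{op} {obj} {params}")
    return log

# Demonstration on one concrete example...
demo = [Action("select", "wheel", ()), Action("scale", "wheel", (2,))]
proc = generalize(demo)
# ...replayed on a similar but not identical scene.
print(replay(proc, {"wheel": "front_wheel_2"}))
```

In this toy version the "generalization" is just role abstraction; the role bindings stand in for what the full system infers from annotations or verbal explanations.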