Here are some responses to both your questions:
Empirical standard errors are the default in ConQuest. If CQ falls back to quick standard errors, there is probably a problem with your model. One common issue is that ConQuest expects a zero category for rating scale responses. If the data are coded 1,2,3,4,5, you can recode them by adding the command "score (1,2,3,4,5) (0,1,2,3,4);" (see the sketch below).
Empirical standard errors are not necessarily larger or smaller than quick standard errors.
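For illustration, here is a minimal sketch of a ConQuest command file with such a score statement. The file names, the response column positions, and the model statement are placeholders for your own setup, so adjust them before running:

    datafile mydata.dat;              /* placeholder data file */
    format responses 1-20;            /* placeholder columns holding the item responses */
    score (1,2,3,4,5) (0,1,2,3,4);    /* recode categories 1-5 to 0-4 so the lowest category is zero */
    model item + step;                /* rating scale model; replace with your own model statement */
    estimate;
    show >> results.shw;              /* write the results tables */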
If the model converges, WLE estimates should be (close to) identical each time you run the same model. Plausible value results differ slightly between runs because plausible values are random draws that reflect the measurement error, and measurement errors are larger for short tests. If the differences between runs seem large, your model may not have converged well. For most simple models, adding "keeplastest=yes" to the SET command will fix the problem. Without this option, CQ takes the results from the iteration with the lowest deviance; usually this is the last iteration, but not always, and in those cases CQ reports results from an iteration at which the parameters had not yet converged. It is also important to export a log file and check whether the results are acceptable.
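As a sketch, those statements could look like the following. Treat the export statement and the show options as my assumptions based on typical ConQuest syntax rather than a tested command file, and check them against the manual for your version; all file names are placeholders:

    datafile mydata.dat;                      /* placeholder data file */
    format responses 1-20;                    /* placeholder response columns */
    set keeplastest=yes;                      /* keep the estimates from the final iteration */
    export logfile >> run.log;                /* assumed syntax: write a log to inspect convergence */
    model item + step;                        /* placeholder model statement */
    estimate;
    show cases !estimates=wle >> wle.shw;     /* WLE ability estimates */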
Hope this helps.
Best wishes
Eveline