Understanding and Accelerating the Memory Processing Pipeline for Disaggregated LLM Inference