Large language models (LLMs) have demonstrated impressive capabilities across a wide range of tasks and are bringing transformative changes to many domains. However, keeping LLM knowledge up to date remains a challenge once pretraining is complete. It is therefore essential to design effective methods that both update obsolete knowledge and inject new knowledge into LLMs. Existing locate-and-edit knowledge editing (KE) methods suffer from two limitations. First, LLMs edited with these methods generally struggle to answer complex queries that require multi-hop reasoning. Second, the long runtime these methods need to perform each knowledge edit makes them impractical for large-scale KE. In this article, we explore parameter-efficient fine-tuning (PEFT) as an alternative to KE. We select a more comprehensive temporal KE dataset, containing both knowledge-injection and knowledge-update examples, for benchmarking KE performance. Furthermore, we investigate the effect of fine-tuning different layers of an LLM on the multi-hop QA task. We find that PEFT outperforms locate-and-edit techniques for urgent knowledge edits.
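
As a concrete illustration of the PEFT approach described above, the sketch below attaches LoRA adapters to a chosen subset of transformer layers using the Hugging Face peft library, so that fine-tuning touches only a small fraction of parameters at a selected depth. The model name, target modules, layer indices, and hyperparameters here are illustrative assumptions, not the exact configuration used in this work.

```python
# A minimal LoRA-based PEFT sketch for knowledge editing; all choices below
# (model, target modules, layer subset, rank) are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # placeholder; substitute the LLM under study
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Restrict LoRA adapters to a subset of transformer layers so the effect of
# editing knowledge at different depths (e.g., on multi-hop QA) can be compared.
lora_config = LoraConfig(
    r=8,                            # low-rank adapter dimension
    lora_alpha=16,                  # scaling factor for the adapter update
    target_modules=["c_attn"],      # attention projection in GPT-2; model-specific
    layers_to_transform=[4, 5, 6],  # hypothetical layer subset to fine-tune
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # verify only a small fraction is trainable

# Fine-tune `model` on edit examples (e.g., statements expressing the new or
# updated fact) with a standard causal-LM loss; only the LoRA parameters
# receive gradients, leaving the base weights untouched.
```

Because only the adapter weights are trained, each batch of edits amortizes to a short fine-tuning run rather than a per-fact weight localization step, which is what makes this style of PEFT attractive for large-scale or time-critical knowledge edits.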