MCiteBench / data_upload_test.jsonl
caiyuhu's picture
Upload data_upload_test.jsonl
e46d19a verified
raw
history blame
53.6 kB
{"question_id": "a8a6339a943fa79ae72382fb9f1d022d8409510904d542963b580682babf239b", "pdf_id": "d969953a0cbdd7fa8485cf1555a32f7b3d62a7a4", "url": "https://openreview.net/forum?id=7Ab1Uck1Pq", "question_type": "explanation", "question": "Why is the performance of your method better on paraphrased datasets than on the Normal Dataset?", "answer": "Regarding the occasionally better performance of Profiler (and also other baselines) on paraphrased datasets in Table 1 and Table 2, it is important to note that these are in-distribution results, where the training and test data distributions are the same. When detectors are tested in an out-of-distribution setting—where the detector is trained on the original dataset and tested on the paraphrased dataset—all detectors exhibit a performance degradation, as shown in Figure 4. The improved performance on paraphrased datasets under the in-distribution setting suggests that paraphrased data is more separable in this context. We attribute this to two main reasons: (1) paraphrasing may inadvertently expose more model-specific characteristics, and (2) different LLMs may interpret and encode patterns of human-written texts differently, thereby reducing detection complexity. However, the performance drop observed in the out-of-distribution setting indicates that paraphrasing remains an effective evasion technique in real-world deployments.", "evidence_keys": ["Table 1", "Table 2", "Figure 4"], "evidence_contents": ["images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg", "images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg", "images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg"], "evidence_modal": ["mixed"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. ": "1"}, "idx_2_text": {"1": "where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. "}, "image_2_idx": {"images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg": "1", "images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg": "4"}, "idx_2_image": {"1": "images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg", "4": "images/ee2afb98560f1235c1389cdfa71d968a022447de87c1ce4638ab8d2f07577b6b.jpg"}, "table_2_idx": {"images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg": "1", "images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg": "2"}, "idx_2_table": {"1": "images/28a3fb2eb860b2336250a168e4be3a619b992036b649ac8490cf8856010977aa.jpg", "2": "images/66b65d6ef25661f61096bcb5ba493ecd43565a459b1d58ac7031289ca040692c.jpg"}, "meta_data": {""}, "distractor_contents": ["images/6aab4e92c975eda2500335b4f5cd9ae1c46b56dc872b79f30e841caf88d27a56.jpg", "where V is the vocabulary of the surrogate model M, and P˜k ∈R||V ||×1 is the one-hot encoded vector of input token xk over the vocabulary list V . The calculated context losses L = [L1, · · · , LW ] are then used in the next stage to extract the inference pattern. "]}
{"question_id": "4ab6d6d8dcdf8b7a45b9b9c864dc3959193bbda43c25d024ee44e0234248444d", "pdf_id": "e2297ed06ca065d361ec3f28961b352c3377db10", "url": "https://openreview.net/forum?id=0QkVAxJ5iZ", "question_type": "explanation", "question": "What improvements does FacLens provide over existing methods?", "answer": "Our work has clear improvements over existing works in practical applications (efficiency beyond performance) due to the following reasons. In Figure 2, we compare the ante-hoc method (FacLens) with post-hoc methods (SAPLMA and INSIDE). Unlike post-hoc methods, which rely on costly answer generation, the ante-hoc method avoids inference costs and controls risks in advance. As shown in Figure 2, despite post-hoc methods having more information (i.e., generated answers), FacLens still performs better. Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both baselines in terms of training efficiency (see Table 2), which is a crucial factor for practical application.", "evidence_keys": ["Figure 2", "Table 1", "Table 2"], "evidence_contents": ["images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg", "images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg", "images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg"], "evidence_modal": ["mixed"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. ": "1", "NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. ": "2"}, "idx_2_text": {"1": "Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. ", "2": "NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. "}, "image_2_idx": {"images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg": "2"}, "idx_2_image": {"2": "images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg"}, "table_2_idx": {"images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg": "2", "images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg": "1"}, "idx_2_table": {"2": "images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg", "1": "images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg"}, "meta_data": {""}, "distractor_contents": ["NFP Dataset Construction. Given an LLM m and a QA dataset, for each question q ∈Q, we assign a binary label y to the (m, q) pair, where y = 1 if m fails to generate the golden answer for q, and y = 0 otherwise. The goal of NFP is to predict the labels prior to answer generation. Specifically, we follow previous work (Mallen et al., 2023) to adopt QA datasets with short answers like entity mentions, and mark an LLM’s response as non-factual (i.e., y = 1) if no sub-string of the response matches any of the gold answers.2 To ensure the experimental reproducibility, we set the LLM’s decoding strategy to greedy search rather than top-p or top-k sampling. We have also run the sampling-based decoding for response generation, and find that all the experimental conclusions in this paper still hold true. In this work, we consider four LLMs and three QA datasets, which results in 4 × 3 = 12 NFP datasets. In each NFP dataset, consisting of samples in the form of ((m, q), y), we randomly sample 20% samples for training, 10% samples for validation, and use the remaining samples for testing. ", "Unsupervised domain adaptation performs well for cross-LLM FacLens. Given an LLM, we train FacLens on the training data of the corresponding domain and directly test the FacLens on the test data of another domain. The results in the upper part of Figure 6 are unsatisfactory. After unsupervised domain adaptation, the cross-LLM FacLens can work well in the target domain, as depicted in the the lower part of Figure 6. We also discuss the choice of the kernel function in Appendix G, and find that linear kernel performs well, indicating that the NFP features derived by genc are inherently discriminative. Furthermore, we observe that FacLens demonstrates better transferability between LLMs of similar scales. In future work, we will explore more effective methods to enhance FacLens’s transferability between LLMs of significantly different scales. "]}
{"question_id": "670d6826b93a707dab76d21a73b5c691457ec286bcc186606cd4c02327464670", "pdf_id": "e2297ed06ca065d361ec3f28961b352c3377db10", "url": "https://openreview.net/forum?id=0QkVAxJ5iZ", "question_type": "explanation", "question": "How does FacLens compare to previous methods in terms of performance?", "answer": "Table 1 shows that FacLens achieves clear performance gains over most baselines. While the performance gains over LoRA and Self-Evaluation are slightly smaller, FacLens significantly outperforms both of them in terms of training efficiency (see Table 2), which is crucial for practical applications. Moreover, as shown in Figure 2, we compared FacLens with post-hoc methods. Despite post-hoc methods having access to additional information (i.e., the generated answers), FacLens still performs better.", "evidence_keys": ["Table 1", "Table 2", "Figure 2"], "evidence_contents": ["images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg", "images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg", "images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg"], "evidence_modal": ["mixed"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. ": "1"}, "idx_2_text": {"1": "where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. "}, "image_2_idx": {"images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg": "7", "images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg": "2"}, "idx_2_image": {"7": "images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg", "2": "images/3a032e8ef66ebf1569cb7a5f5b30d2f997352c8737325ac4a352e836ccc0b46b.jpg"}, "table_2_idx": {"images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg": "2", "images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg": "1"}, "idx_2_table": {"2": "images/169180e0dc9b1431032dd782a66b4967dd3b610e1bdbbe40e7daf0b1a0519c10.jpg", "1": "images/b864240b4fe713dbd59ab6cd0219dfd87ab698d5d8661c3167be66f1532aa911.jpg"}, "meta_data": {""}, "distractor_contents": ["images/add180d7870c649480bd2826bf4b5b054bf92dd72510bcbfde99e0efaf2a9972.jpg", "where zS,i = genc (xS,i) , zT,j = genc (xT,j), NS = NT = |Qtrain| is the number of questions for training, and k (·) denotes a kernel function. We discuss the choice of kernel function in Appendix G. The hidden question representations are taken from the middle layer of the LLM. "]}
{"question_id": "8caf5a4e8ea45a9c61b2a596fe76417f7aa5a3d875406f1784a388872e17ead8", "pdf_id": "ff04147bfeb3ecdb49c1ad6b729c8776be9205bc", "url": "https://openreview.net/forum?id=jZsN9zo8Qi", "question_type": "explanation", "question": "What analyses have the authors done on how properties of the dataset affect the performance of MLLMs?", "answer": "In Figure 5 of the paper, we present the relationship between the number of images and the accuracy of image association in the IITC task. From the figure, we can see the following: 1. The image association accuracy of the VEGA-base-4k model decreases as the number of images increases. 2. For the other closed-source models, there is also a general negative correlation between the number of images and image association accuracy. The increase in the number of images makes image selection in the IITC task more challenging. We have supplemented the analysis with the relationship between token length and image accuracy. Details can be found in the table above: 1) Statistically, there is a general negative correlation between image accuracy and token length. 2) Due to the uneven distribution of token lengths in the test set (see Figure 4 of the paper), there is a limited amount of test data in the 0-1k and 7-8k ranges (with only 9 and 28 samples, respectively), which may lead to some margin of error in these intervals. 3) As shown in Table 2 of the paper, for all models, the image accuracy in IITC 4k is higher than in IITC 8k, further supporting the negative correlation between accuracy and token length. The increase in context length introduces more redundant information, making image selection more challenging.", "evidence_keys": ["Figure 5", "Figure 4", "Table 2"], "evidence_contents": ["images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg", "images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg", "images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg"], "evidence_modal": ["mixed"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. ": "1"}, "idx_2_text": {"1": "Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. "}, "image_2_idx": {"images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg": "4", "images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg": "1", "images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg": "5"}, "idx_2_image": {"4": "images/804f68c55932623c3d9dfb50941f9e1f5b2d9f67de4f5c63abcea211bb0f685a.jpg", "1": "images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg", "5": "images/e4de0bd64fbf86f5f6d26dd1d132f25e1b4ab75f15d00969dc8b95148cc06e38.jpg"}, "table_2_idx": {"images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg": "2"}, "idx_2_table": {"2": "images/c253055b0b2a2668877b56f92b77d8e7edba2c56e1e1558d781fe91e3e938c77.jpg"}, "meta_data": {""}, "distractor_contents": ["images/31070a6e7c3b53f02d80b85f2a2fffaeba361f9c85891c05adfcb10e733faf05.jpg", "Ultimately, we have developed a novel dataset, designated as VEGA. It is comprised of two subsets, one tailored for the IITC task and another for the ITA task. The longest interleaved image-text content in VEGA reaches up to 8 images and 8k tokens. We design the instruction of the IITC task to be a question about only one of the images, requiring the model to specify the image it refers to in its answer. We assess the model’s interleaved image-text reading comprehension ability by both the correct rate of associated images, and the text quality of the answer by ROUGELin (2004) and BLEU Papineni et al. (2002). We have evaluated several state-of-the-art MLLMs on our dataset, validating the challenge of our tasks. Furthermore, we have fine-tuned the Qwen-VL-Chat model Bai et al. (2023) on the VEGA dataset to set a robust baseline for the IITC task. "]}
{"question_id": "e103290df88fe0eeb1f60aaf6df31d7daf51b5ff817ce6d7c06e4f19ca381e1f", "pdf_id": "05fe05b0399402d34686a7b695820eaf3b6b5eca", "url": "https://openreview.net/forum?id=Hcb2cgPbMg", "question_type": "explanation", "question": "How does the paper address the marginal improvements observed in the experimental results?", "answer": "Notice that spectral regularization is always amongst the best-performing methods in all experiments. Moreover, in several experiments, spectral regularization was significantly better than any other baseline: Figure 1 (left), Figure 2 (right), Figure 3.", "evidence_keys": ["Figure 1", "Figure 2", "Figure 3"], "evidence_contents": ["images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg", "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg", "images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg"], "evidence_modal": ["figure"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. ": "1"}, "idx_2_text": {"1": "Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. "}, "image_2_idx": {"images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg": "3", "images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg": "1", "images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg": "5", "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg": "2"}, "idx_2_image": {"3": "images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg", "1": "images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg", "5": "images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg", "2": "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg"}, "table_2_idx": {}, "idx_2_table": {}, "meta_data": {""}, "distractor_contents": ["Neural network initialization is key to trainability (He et al., 2015; Hinton and Salakhutdinov, 2006). One property of the initialization thought to be important is that the layerwise mapping, hl+1 = ReLU(θlhl), has a Jacobian with singular values that are close to or exactly one (Glorot and Bengio, 2010; Pennington et al., 2017; Saxe et al., 2014; Xiao et al., 2018). Writing this Jacobian explicitly, we have that Jl = ∂∂hlh+1 = Dlθl where Dl = Diag(ReLU′([θlhl]1), . . . , ReLU′([θlhl]d)). 2 We can obtain upper and lower bounds on the singular values of the layerwise Jacobian in terms of the singular values of the weight matrix. Denoting the ordered singular values of θl and Dl by σd(θl) ≤· · · ≤σ1(θl) and σd(Dl) ≤· · · ≤σ1(Dl), respectively, we have σd(Dl)σi(θl) < σi(Jl) < σ1(Dl)σi(θl) for all i ∈{1, . . . , d} (Zhang, 2011, Theorem 8.13). In particular, if the spectral norm (largest singular value) of the weight matrix θl increases, then the spectral norm of the Jacobian Dl increases as well, potentially impacting trainability. Furthermore, the condition number κ(Jl) = σ1(Jl)/σd(Jl) can be bounded with the product of the condition numbers of θl and Dl, κ(θl) and κ(Dl) as κ(θl)/κ(Dl) ≤κ(Jl) ≤κ(θl)κ(Dl). Thus, if our goal is to keep the singular values of the Jacobian close to one by controlling the singular values of the weight matrix, we should ensure that the condition number of the latter is not too large. ", "images/a3485f80f366e7de3691aaad423ab16f973943002252e73580726f373f1bd657.jpg"]}
{"question_id": "22132795dd4d718836bcea76aa5a9ee27154f136067d4d67d1e043271a66c6a1", "pdf_id": "05fe05b0399402d34686a7b695820eaf3b6b5eca", "url": "https://openreview.net/forum?id=Hcb2cgPbMg", "question_type": "explanation", "question": "What improvements does spectral regularization provide over L2 regularization?", "answer": "Empirically, spectral regularization is a large improvement over L2 regularization in several of our experiments, e.g. Figure 1 (left), Figure 2 (right), and Figure 3. Moreover, spectral regularization is more robust to its hyperparameter and always among the 1 or 2 best-performing methods in all of our experiments.", "evidence_keys": ["Figure 1", "Figure 2", "Figure 3"], "evidence_contents": ["images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg", "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg", "images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg"], "evidence_modal": ["figure"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ": "1", "Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. ": "2"}, "idx_2_text": {"1": "Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ", "2": "Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. "}, "image_2_idx": {"images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg": "3", "images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg": "1", "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg": "2"}, "idx_2_image": {"3": "images/99156fcd57bbe834cdf486d0f7684362f9231d5f09b81a4e5686dc70589ef3c9.jpg", "1": "images/cc3e375bd58a5db5d5bfb40f3e0e6e18698bfc87b9e91e9c19f39f395a094447.jpg", "2": "images/eae1f91f604a2b20a638a94f3ad6b7ae424d1a59ccfeab1575bd54f87cd3f353.jpg"}, "table_2_idx": {}, "idx_2_table": {}, "meta_data": {""}, "distractor_contents": ["Loss of Trainability Mitigators In our main results, we compare spectral regularization against L2 regularization towards zero, shrink and perturb (Ash and Adams, 2020), L2 regularization towards the initialization (Kumar et al., 2023), recycling dormant neurons (ReDO, Sokar et al., 2023), concatenated ReLU (Abbas et al., 2023; Shang et al., 2016), and Wasserstein regularization (Lewandowski et al., 2023). Several regularizers in the continual learning without forgetting literature rely on privileged task information, which is not applicable to the task-agnostic setting that we consider. We use the streaming conversion (Elsayed and Mahmood, 2024) to transform elastic weight consolidation (Kirkpatrick et al., 2017; Zenke et al., 2017), so that it no longer requires task boundary information, and include it as a baseline. Additional experiment details can be found in Appendix B. ", "Loss of plasticity in the continual learning literature can refer to either loss of trainability (Dohare et al., 2021; Lyle et al., 2023) or to loss of generalization (Ash and Adams, 2020). Because trainability is a requirement for learning and generalization, we focus primarily on loss of trainability. Specifically, we use loss of trainability to refer to the phenomenon that the objective value, Jτ(θ(τT )), increases as a function of the task τ. Equivalently, the performance measures, such as accuracy, decrease with new tasks. Under the assumption that the tasks are sampled independently and identically, this would suggest that the neural network’s trainability diminishes on new tasks. "]}
{"question_id": "c8f71f59ce47e86848347df22d37552cc7e4d12d8bf81a5447d0338086cffd33", "pdf_id": "5aa218287d89432e6fc34652ca252cfe99d92e21", "url": "https://openreview.net/forum?id=aOiKt5b0NA", "question_type": "explanation", "question": "How are passenger profiles integrated into the origin-destination matrix at the regional or stop level?", "answer": "As shown in Figure 1(d), a walking distance is deemed acceptable if it is limited to 1.1 km. Concerning the average velocity, Figure 1(e), and the trip time, Figure 1(f), all registers with values greater than 80 km/h and 2 hours are unconsidered. These values were estimated by local specialists based on the passengers’ usage patterns and the transportation infrastructure in Salvador. We emphasize that the reader can modify these values according to their needs once both raw and processed data are shared.", "evidence_keys": ["Figure 1"], "evidence_contents": ["images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg"], "evidence_modal": ["figure"], "evidence_count": 1, "distractor_count": 4, "info_count": 5, "text_2_idx": {"In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ": "1", "In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more proftiable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. ": "2", "All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. Additional details about all the data comprising SUNT are available in Apendix B. ": "3"}, "idx_2_text": {"1": "In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ", "2": "In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more proftiable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. ", "3": "All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. Additional details about all the data comprising SUNT are available in Apendix B. "}, "image_2_idx": {"images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg": "1", "images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg": "3"}, "idx_2_image": {"1": "images/c5a31d9a40f25518ecd6eaeca1e01c4b3acbd744bcdc450d17a22bfd197dd041.jpg", "3": "images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg"}, "table_2_idx": {}, "idx_2_table": {}, "meta_data": {""}, "distractor_contents": ["images/23f1fdc539186d67330f172d2edf9ee702c9e85cdedb350bee9c37ac4c5cfed4.jpg", "In the subsequent phase, Figure 1(c), we analyzed user types to determine the feasibility of estimating their alighting points. In Salvador, there is no device to validate the passengers’ alighting; therefore, the main challenge is to estimate it by analyzing the following boarding. Moreover, it is impossible to track older people because they are not individually identified. According to local policies, the fares for such passengers are recorded as general users without identification. Consequently, we are unable to estimate their alighting points. Another particular case that prevents us from identifying users’ alighting points occurs when there is only a single trip registration on a given day. In such cases, we can only determine the boarding point, with no information available about the alighting point. Therefore, we cannot consider such situations in our analyses. ", "All information shared by SUNT was collected from March 2024 to July 2024 and aggregated into 5-minute intervals. This interval allows the data to be represented as a temporal graph, in addition to the spatial information. However, we emphasize that this interval can be adjusted according to the readers’ requirements. It is possible to work with a static graph using a single interval or to summarize all days using, for example, a mean function. Additional details about all the data comprising SUNT are available in Apendix B. ", "In our context, spatial data do not depend on time t, i.e., their information is time-invariant. Specifically, in every vertex vi ∈V , we store the following features: geographical position, number of boarding and alighting per vehicle, and passenger load. The features specifically concerning edges (vi, vj) ∈E include the distance between stops and stations, the trip duration, the mean velocity, and the Renovation Factor (RF). The RF is a well-known metric used in transportation research to assess the total demand in a line, i.e., it is computed on a set of edges that belong to the line [ITDP, 2016]. Formally, this metric is the ratio of the total demand of a line to the load on its critical link. Higher renovation factors occur when there are many short trips along the line. Corridors with very high renovation factor rates are more proftiable because they handle the same number of paying customers with fewer vehicles [ITDP, 2016]. Besides the individual features, there is relevant information shared by both vertices and edges, such as the number of passengers per vehicle, lines and directions, vehicle characteristics, altitude, and trips. "]}
{"question_id": "28348747626e6364ef8ed1d3cf3ae2a27e837a31c9e72c81cf34fc34a077ec92", "pdf_id": "5f4382c8b4eb16e5bc379f3c02f21f53318dbacb", "url": "https://openreview.net/forum?id=Bp2axGAs18", "question_type": "explanation", "question": "What is the rationale for the experimental configurations chosen in the study?", "answer": "Figure 4 (MAD): This figure focuses on a case study demonstrating a counter-intuitive phenomenon where introducing errors can improve performance—a rare observation in multi-agent systems. MAD was selected specifically for its relevance to this unique insight. Figure 7a (Exclusion of MAD): MAD was excluded from Figure 7a because this experiment involves scenarios with malicious instruction-sending agents, which are not present in the MAD system configuration. Figure 8 (Self-collab and Camel): Only Self-collab and Camel are included in Figure 8 because they represent the weaker systems within the Linear and Flat structures, respectively. Our objective in this experiment is to illustrate how our proposed defense method enhances resilience in weaker systems. To provide greater clarity on our multi-agent system settings, we have added a comprehensive table summarizing the experimental configurations.", "evidence_keys": ["Figure 4", "Figure 7", "Figure 8"], "evidence_contents": ["images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg", "images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg", "images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg"], "evidence_modal": ["figure"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. ": "1"}, "idx_2_text": {"1": "Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. "}, "image_2_idx": {"images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg": "4", "images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg": "7", "images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg": "8", "images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg": "1"}, "idx_2_image": {"4": "images/0b55b386169f13d50b1b7ff47bfa61c9126516d2fca0fe9057685662016c9e22.jpg", "7": "images/c0b3ca79844ea127d86dcdd3fcf3e95955edba6f7897f1c1e732e26c9b917a50.jpg", "8": "images/28fe1291bf019109651723940887ed6ff1e1b4a60028a15b648aaa959d5b622c.jpg", "1": "images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg"}, "table_2_idx": {}, "idx_2_table": {}, "meta_data": {""}, "distractor_contents": ["Current LLMs prioritize natural language over code. Fig. 6b illustrates that distraction comments can mislead LLMs into accepting incorrect code as correct across all six systems studied. This indicates that the systems tend to prioritize comments over the actual code. In the example, the system detects an error in the code when no comments are present. However, when a comment stating “the bug had been corrected” is added, the system overlooks the error and proceeds with the next task. AUTOTRANSFORM exploits this characteristic of LLMs to execute successful attacks. ", "images/18cebe9c02b6160708591815b62a39d65034f0db42e6f9495510e4a203c2c009.jpg"]}
{"question_id": "5357a51d9c1a64e442ce83018c4e81ed44c53e736a443bd65b61b021ea85c150", "pdf_id": "67ffaaf503d82d0615454baf237f5e5a9ff7bb19", "url": "https://openreview.net/forum?id=Ei9KiIzgxK", "question_type": "explanation", "question": "What evidence supports the claim of improved zero-shot generalization?", "answer": "We respectfully disagree with the reviewer’s assertion that the paper does not demonstrate improved zero-shot generalization, as we show this in Procgen (see aggregate performance added to Table 3). Additionally, we present the FDD approach (Table 2), where we observe improvement in the generalization gap for the DMC environments. That said, we understand that the improved performance in the original environment in Table 1 (not necessarily a bad thing!) could lead to confusion. We are happy to rephrase the title if you have a recommendation. One proposal could be 'Synthetic Data Enables Training Robust Agents from Offline Data,' as our agents perform well across a wide range of settings. We also updated Tables 2 and 3 to include $Test/Train$ and $Train-Test$ results for both environments, aligning with the metrics suggested by the reviewer.", "evidence_keys": ["Table 2", "Table 3", "Table 1"], "evidence_contents": ["images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg", "images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg", "images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg"], "evidence_modal": ["table"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {}, "idx_2_text": {}, "image_2_idx": {"images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg": "5", "images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg": "2"}, "idx_2_image": {"5": "images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg", "2": "images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg"}, "table_2_idx": {"images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg": "1", "images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg": "2", "images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg": "3"}, "idx_2_table": {"1": "images/4700ceaef17c3682b2201018d66e8a2e1c59985dcb94ee0a1293d3d2c28e41f8.jpg", "2": "images/5569c6a3bfa524742be607e0b74ea0c979027562b132ff607589531c164f2e8b.jpg", "3": "images/dcb7d6d85125826affdf8c728bcb27be4cf1096384a051e10fe381534f2d375b.jpg"}, "meta_data": {""}, "distractor_contents": ["images/8759ff84202e49665b7630122fce5a4391e5fd728dd26bcf8bd79ba128b548f2.jpg", "images/d65db754ee4fe5be255f66580d4f92aa03a1107db5b556c3b4ea7d63b56fec34.jpg"]}
{"question_id": "c1bc3c66ef0dee68fef185813dcc321a868969e1fce058e8db05d4896e37025c", "pdf_id": "8b6c738aadc6b44e6ec8736d7e10c499122c0609", "url": "https://openreview.net/forum?id=CbpWPbYHuv", "question_type": "explanation", "question": "Do you have a proof that PolyReLU and PolyNorm have equivalent expressivity?", "answer": "Thank you for pointing out the less precise expression. We have rephrased the sentence as follows: 'From Figure 1, one can see that the expressivity of PolyNorm is greater than or equal to that of PolyReLU.' The claim is primarily supported through the empirical evidence provided in the paper. As can be observed in Figure 1, Figure 6 and Figure 7, both PolyReLU and PolyNorm exhibit superior expressivity in comparison to other activation functions, with PolyNorm demonstrating equal or greater expressive capacity than PolyReLU.", "evidence_keys": ["Figure 1", "Figure 6", "Figure 7"], "evidence_contents": ["images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg", "images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg", "images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg"], "evidence_modal": ["figure"], "evidence_count": 3, "distractor_count": 2, "info_count": 5, "text_2_idx": {"Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. ": "1"}, "idx_2_text": {"1": "Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. "}, "image_2_idx": {"images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg": "7", "images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg": "6", "images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg": "2", "images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg": "1"}, "idx_2_image": {"7": "images/b14e1904e8ec3fdee265107c7746e771c19d93de74f082fb6f52c4f54678406b.jpg", "6": "images/046ab70a2b4e254b1ece36680859bb7ab5fac1877cc75d8c41d950778c8f1046.jpg", "2": "images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg", "1": "images/a4c46101de0b0f13b987de572c9324742705fcb26f894ba6c6254c285adddf1a.jpg"}, "table_2_idx": {}, "idx_2_table": {}, "meta_data": {""}, "distractor_contents": ["images/824414a9a148b783330a35bbd312329fc253390c4f58a7711bcb9a1d90809da1.jpg", "Hyperparameters. Unless otherwise specified, we use a third-order PolyCom by default and initialize the coefficients as ai = 1/3 for i = 1, 2, 3 and set a0 = 0. Model weights are randomly initialized. For optimization, we apply the AdamW optimizer with β1 = 0.9 and β2 = 0.95. All models are trained on sequences of 4096 tokens. For the dense model, we set the initial learning rate to 3e-4, decaying to 1.5e-5 using a cosine scheduler. The MoE model starts with a learning rate of 4e-4, also decaying according to a cosine schedule. We summary the hyperparameters in Table 7. "]}