In the CJE journal's LaTeX template, tables are typeset in a single column by default, and the journal provides no documentation on how to use the template. Moreover, the template defines its own custom table commands, so tables cannot be made to span both columns the way one would in an international journal template.
Requirement: change the custom `\astable` definition in the cls file so that tables can span both columns.
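For reference, by "the way international journal templates do it" I mean the standard `table*` float of an ordinary two-column class; a minimal sketch (plain article class, not the journal's cls):

```tex
\documentclass[twocolumn]{article}
\usepackage{booktabs}
\begin{document}
Some two-column body text \ldots

% In a standard twocolumn class, the starred float spans both columns.
\begin{table*}
  \centering
  \caption{DNN models and datasets}
  \begin{tabular}{l l c l c}
    \toprule
    DNN & Input size & Batchsize & Convolution layers & Dataset \\
    \midrule
    YOLOv3 & $416\times416$ & 64 & 75/107 & MS COCO2014 \\
    \bottomrule
  \end{tabular}
\end{table*}

More two-column body text \ldots
\end{document}
```

As far as I understand, this does not carry over to the dianzixuebao template because its body text is wrapped in a `multicols` environment, where ordinary floats are not allowed and `table*` placement is deferred to a later page; that would also explain why the classic `table` environment shows nothing here.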
I don't understand your code.
Is the `multicols` environment part of the template, or did you add it yourself?
Answer: 2019-11-03 16:45
```tex
%\documentclass{article}
\documentclass{dianzixuebao}
\newcommand{\MDIyear}{xxxx}% year
\newcommand{\MDImonth}{xx}% month
\newcommand{\MDIissuevolume}{xx}% volume
\newcommand{\MDIissuenumber}{xx}% issue
\newcommand{\MDIshorttitle}{Fers}
\usepackage{newfloat,caption}
\usepackage{subcaption}
\usepackage{graphicx}
\usepackage[svgnames]{xcolor}
\usepackage{multicol}
\usepackage{multirow}
\usepackage{booktabs} % for formal tables (\toprule, \midrule, \bottomrule)
\usepackage{tabularx}
\usepackage{array}
\usepackage{amssymb}
\usepackage{textcomp}
\usepackage[misc,geometry]{ifsym} % small envelope superscript after a name to mark the corresponding author
\begin{document}
\begin{multicols}{2}
To compare the image throughput performance of data parallel training and inference of DNN models on the experimental cluster with the corresponding image throughput performance obtained with the CUDA-enabled GPU workstation, CUDA-accelerated and cuDNN-accelerated DNNs are also implemented.
For comparison, Figure \ref{fig:clusterthr} shows the image throughput of data parallel training and inference of YOLOv3, ResNet-152 and DenseNet-201 on the experimental ARMv8 CPU cluster and the GPU workstation. \texttt{Train\_FTCL} and \texttt{Inference \_FTCL} denote the image throughput realized by using the proposed FTCL-Darknet framework on the experimental many-core CPU cluster for the training and inference of DNN models respectively. \texttt{Train\_CUDA\_1080Ti} and \texttt{Inference \_CUDA\_1080Ti} denote the image throughput obtained with the CUDA-accelerated Darknet on the GPU workstation without using cuDNN, while \texttt{Train\_CUDNN\_1080Ti} and \texttt{Inference\_CUDNN\_1080Ti} denote the image throughput achieved with the cuDNN-accelerated Darknet on the GPU workstation.
The data parallel training performance of YOLOv3, ResNet-152 and DenseNet-201 on the experimental ARMv8 many-core CPU cluster reaches 1.3, 2.5 and 2.8 images/s respectively. On average this is about 16.1\% of the training performance obtained on the CUDA-enabled GPU workstation, and approximately 3.8\%, 7.9\% and 7.4\% of the training performance achieved with the cuDNN-enabled GPU workstation respectively.
On the other hand, the parallel inference performance reaches 7.1, 6.2 and 5.9 images/s respectively. On average this is about 17.6\% of the inference performance obtained on the CUDA-enabled GPU workstation, and approximately 14.3\%, 16.1\% and 15.3\% of the inference performance achieved with the cuDNN-enabled GPU workstation.
\end{multicols}% pause the two-column layout so the table below spans the full text width
\astable{
\astabletitle{\bfseries Table 1.\ DNN models and datasets}
\astableobj{
{\setlength{\tabcolsep}{3mm}% keep the \tabcolsep change local to this group
\begin{tabular}{l l c l c}% five columns: DNN, input size, batch size, conv. layers, dataset
\toprule
DNN &Input size &Batchsize &Convolution layers &Dataset \\
\midrule
YOLOv3 \cite{Redmon2018p} &$416\times416$ &64 &75/107 &MS COCO2014\\
\midrule
ResNet-152 \cite{He20162ICoCVaPRC770} &$256\times256$ &256 &152/206 &ImageNet2012\\
\midrule
DenseNet-201 \cite{Huang2017ICoCVaPRC2261} &$256\times256$ &256 &201/305 &ImageNet2012\\
\bottomrule
\end{tabular}%
}
}
}
\begin{multicols}{2}% resume the two-column layout after the full-width table
To compare the image throughput performance of data parallel training and inference of DNN models on the experimental cluster with the corresponding image throughput performance obtained with the CUDA-enabled GPU workstation, CUDA-accelerated and cuDNN-accelerated DNNs are also implemented.
For comparison, Figure \ref{fig:clusterthr} shows the image throughput of data parallel training and inference of YOLOv3, ResNet-152 and DenseNet-201 on the experimental ARMv8 CPU cluster and the GPU workstation. \texttt{Train\_FTCL} and \texttt{Inference \_FTCL} denote the image throughput realized by using the proposed FTCL-Darknet framework on the experimental many-core CPU cluster for the training and inference of DNN models respectively. \texttt{Train\_CUDA\_1080Ti} and \texttt{Inference \_CUDA\_1080Ti} denote the image throughput obtained with the CUDA-accelerated Darknet on the GPU workstation without using cuDNN, while \texttt{Train\_CUDNN\_1080Ti} and \texttt{Inference\_CUDNN\_1080Ti} denote the image throughput achieved with the cuDNN-accelerated Darknet on the GPU workstation.
The data parallel training performance of YOLOv3, ResNet-152 and DenseNet-201 on the experimental ARMv8 many-core CPU cluster reaches 1.3, 2.5 and 2.8 images/s respectively. On average this is about 16.1\% of the training performance obtained on the CUDA-enabled GPU workstation, and approximately 3.8\%, 7.9\% and 7.4\% of the training performance achieved with the cuDNN-enabled GPU workstation respectively.
On the other hand, the parallel inference performance reaches 7.1, 6.2 and 5.9 images/s respectively. On average this is about 17.6\% of the inference performance obtained on the CUDA-enabled GPU workstation, and approximately 14.3\%, 16.1\% and 15.3\% of the inference performance achieved with the cuDNN-enabled GPU workstation.
\end{multicols}
\end{document}
```
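The point of this example is not the table code itself but where it sits: the `\astable{...}` block is placed between `\end{multicols}` and a fresh `\begin{multicols}{2}`, so it is typeset at the full text width while the text before and after it remains in two columns. Done this way, no change to the `\astable` definition in the cls file should be necessary.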
- Reply to nudtbx: look at the code. – 啸行 2019-11-03 16:46
- This is what I have, but I don't know how to make it work. – nudtbx 2019-11-03 16:44
- Reply to 啸行: this is what I have, but I don't know how to go about it. – nudtbx 2019-11-03 16:43
- Reply to nudtbx: close the multicols environment before the table, and open a new multicols environment after the table… – 啸行 2019-11-03 16:43
- multicols comes with the template; because of it, the classic table environment has no effect at all (the table is not displayed). – nudtbx 2019-11-03 16:35